Oleg Zabluda's blog
Thursday, October 06, 2016
 
Cocktail "KefIR" aka "Kir Cottage [kä·täZH]" aka "Farmer's Kir" recipe:

1 glass of Kefir
add Crème de cassis to taste, enough to get properly "kirred" (Russian drinking slang, punning on Kir).



 
Convergent Learning: Do different neural networks learn the same representations? (2015) Yixuan Li, Jason Yosinski, et al
"""
In this paper we investigate the extent to which [...] representations learned by multiple nets converge to a set of features which are either individually similar between networks or where subsets of features span similar low-dimensional spaces. We propose a specific method of probing representations: training multiple networks and then comparing and contrasting their individual, learned representations at the level of neurons or groups of neurons. We begin research into this question using three techniques to approximately align different neural networks on a feature level: a bipartite matching approach that makes one-to-one assignments between neurons, a sparse prediction approach that finds one-to-many mappings, and a spectral clustering approach that finds many-to-many mappings. This initial investigation reveals [...] (1) that some features are learned reliably in multiple networks, yet other features are not consistently learned; (2) that units learn to span low-dimensional subspaces and, while these subspaces are common to multiple networks, the specific basis vectors learned are not; (3) that the representation codes show evidence of being a mix between a local code and slightly, but not fully, distributed codes across multiple units.
[...]
1. By defining a measure of similarity between units in different neural networks, can we come up with a permutation for the units of one network to bring it into a one-to-one alignment with the units of another network trained on the same task? Is this matching or alignment close, because features learned by one network are learned nearly identically somewhere on the same layer of the second network, or is the approach ill-fated, because the representations of each network are unique? (Answer: a core representation is shared, but some rare features are learned in one network but not another; see Section 3).

2. Are the above one-to-one alignment results robust with respect to different measures of neuron similarity? (Answer: yes, under both linear correlation and estimated mutual information metrics; see Section 3.2).

3. To the extent that an accurate one-to-one neuron alignment is not possible, is it simply because one network’s representation space is a rotated version of another’s? If so, can we find and characterize these rotations? (Answers: by learning a sparse weight LASSO model to predict one representation from only a few units of the other, we can see that the transform from one space to the other can be possibly decoupled into transforms between small subspaces; see Section 4).

4. Can we further cluster groups of neurons from one network with a similar group from another network? (Answer: yes. To approximately match clusters, we adopt a spectral clustering algorithm that enables many-to-many mappings to be found between networks. See Section S.3).

5. For two neurons detecting similar patterns, are the activation statistics similar as well? (Answer: mostly, but with some differences; see Section S.4).
[...]
Notably, on conv1 and conv2, [...] each neuron in one network can be predicted by only one or a few neurons in another network. For the conv3, conv4, and conv5 layers, the overall error is higher, so it is difficult to draw any strong conclusions regarding those layers.
"""
https://arxiv.org/abs/1511.07543
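The one-to-one matching in question 1 above can be sketched in a few lines. This is a minimal illustration of the general idea, not the paper's code: record both networks' unit activations on the same inputs, compute the cross-network correlation matrix, and solve the resulting bipartite assignment with the Hungarian algorithm (scipy's linear_sum_assignment).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_units(acts_a, acts_b):
    """One-to-one unit alignment between two networks.

    acts_a, acts_b: (n_samples, n_units) activations recorded on the
    same inputs. Returns (perm, scores) where perm[i] is the unit of
    network B matched to unit i of network A, and scores[i] is the
    correlation of that matched pair.
    """
    # Standardize each unit's activations across samples.
    za = (acts_a - acts_a.mean(0)) / (acts_a.std(0) + 1e-8)
    zb = (acts_b - acts_b.mean(0)) / (acts_b.std(0) + 1e-8)
    # corr[i, j] = correlation between unit i of A and unit j of B.
    corr = za.T @ zb / len(acts_a)
    # Hungarian algorithm maximizes total matched correlation.
    rows, cols = linear_sum_assignment(-corr)
    return cols, corr[rows, cols]

# Toy check: network B's units are an exact permutation of A's.
rng = np.random.default_rng(0)
a = rng.normal(size=(1000, 8))
perm_true = rng.permutation(8)
b = a[:, perm_true]
perm, scores = match_units(a, b)
```

On this toy input the recovered assignment inverts the planted permutation and every matched pair has correlation near 1; on real networks the paper finds such clean matches mainly for the reliably learned "core" features.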

See http://yosinski.com/convergent for video and slides:
Yixuan's Talk at the NIPS 2015 Feature Extraction Workshop
Jason's talk at ICLR 2016
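The sparse prediction approach of question 3 can be illustrated the same way. A toy sketch (again mine, not the paper's code) using scikit-learn's Lasso: the L1 penalty drives irrelevant coefficients to exactly zero, so the surviving coefficients name the few units of one network that suffice to predict a unit of the other.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
acts_a = rng.normal(size=(500, 10))      # network A activations
# Synthetic network B: its unit 3 is built from only two units of A.
mix = np.zeros((10, 10))
mix[0, 3], mix[5, 3] = 1.0, -0.5         # B unit 3 <- A units 0 and 5
acts_b = acts_a @ mix + 0.01 * rng.normal(size=(500, 10))

# L1-regularized regression predicts B's unit 3 from all of A's units;
# alpha controls how aggressively coefficients are zeroed out.
model = Lasso(alpha=0.05).fit(acts_a, acts_b[:, 3])
support = np.flatnonzero(np.abs(model.coef_) > 1e-3)  # the few predictive A units
```

Here the recovered support is exactly the two planted source units; the paper's observation is that on conv1 and conv2 such sparse models already predict each unit well from one or a few units of the other network.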



 
Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition (2014) Charles F. Cadieu, et al
"""
unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds.
"""
https://arxiv.org/abs/1406.3284



 
Bay Area Deep Learning School at Stanford (Sep 24-25, 2016)
http://www.bayareadlschool.org/
http://www.bayareadlschool.org/schedule

Full day livestream:
https://www.youtube.com/watch?v=eyovmAtoUx0 (Day 1)
https://www.youtube.com/watch?v=9dXiAecyJrY (Day 2)

Individual videos:
1. Foundations of Deep Learning (Hugo Larochelle, Twitter) - https://youtu.be/zij_FTbJHsk
2. Deep Learning for Computer Vision (Andrej Karpathy, OpenAI) - https://youtu.be/u6aEYuemt0M
3. Deep Learning for Natural Language Processing (Richard Socher, Salesforce) - https://youtu.be/oGk1v1jQITw
4. TensorFlow Tutorial (Sherry Moore, Google Brain) - https://youtu.be/Ejec3ID_h0w
5. Foundations of Unsupervised Deep Learning (Ruslan Salakhutdinov, CMU) - https://youtu.be/rK6bchqeaN8
6. Nuts and Bolts of Applying Deep Learning (Andrew Ng) - https://youtu.be/F1ka6a13S9I

7. Deep Reinforcement Learning (John Schulman, OpenAI) - https://youtu.be/PtAIh9KSnjo
8. Theano Tutorial (Pascal Lamblin, MILA) - https://youtu.be/OU8I1oJ9HhI
9. Deep Learning for Speech Recognition (Adam Coates, Baidu) - https://youtu.be/g-sndkf7mCs
10. Torch Tutorial (Alex Wiltschko, Twitter) - https://youtu.be/L1sHcj3qDNc
11. Sequence to Sequence Deep Learning (Quoc Le, Google) - https://youtu.be/G5RY_SUJih4
12. Foundations and Challenges of Deep Learning (Yoshua Bengio) - https://youtu.be/11rsu_WwZTc



 
Ленинград (Leningrad) - Дорожная / "Road Song" (Ехай нахуй / "Fuck Off") (2013) single
"""
I'll buy myself a snake, or maybe a turtle,
But you, I don't love you, so go...
Go fuck yourself!

Chorus:
Go fuck yourself!
Go fuck yourself!
Go fuck yourself!
Forever!

Better to do a stretch in prison, even a fifteen-year one,
Than to keep running scams with you, so go...
Go fuck yourself!

Chorus:

Go fuck yourself!
Go fuck yourself!
Go fuck yourself!
Forever!

I used to live without a care, but with you I blew it,
I'd have been better off living with the turtle, so go...
Go fuck yourself!
"""
https://www.youtube.com/watch?v=VVkwoY6dQFo



