Oleg Zabluda's blog
Monday, April 17, 2017
 
Siamese Neural Networks for One-shot Image Recognition (2015) Gregory Koch,
Richard Zemel, Ruslan Salakhutdinov
https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf


 
Rylan Schaeffer | Explanation of Neural Turing Machines
Graves, Wayne and Danihelka 2014.
http://rylanschaeffer.github.io/content/research/neural_turing_machine/main.html


 
Learning To Learn Using Gradient Descent (2001) Sepp Hochreiter, et al.
"""
The training data for the meta-learning system is a set of sequences {s_k}, where sequence s_k is obtained from a target function f_k. At each time step j during processing the kth sequence, the meta-learning system needs the function result y_k(j) = f_k(x_k(j)) as a target. The input to the meta-learning system consists of the current function argument vector x_k(j) and a supplemental input which is the previous function result y_k(j-1). The subordinate learning algorithm needs the previous function result y_k(j-1) so that it can learn the presented mapping, e.g. to compute the subordinate model error for input x_k(j-1). We cannot provide the current target y_k(j) as an input to the recurrent network since we cannot prevent the model from cheating by hard-wiring the current target to its output.
"""
http://snowedin.net/tmp/Hochreiter2001.pdf


 
It seems there is huge money involved here



Powered by Blogger