Oleg Zabluda's blog
Friday, March 02, 2018
 
Train longer, generalize better: closing the generalization gap in large batch training of neural networks (2017) Elad Hoffer, Itay Hubara, Daniel Soudry
"""
We find that the weight distance from its initialization grows logarithmically with the number of weight updates. We therefore propose a "random walk on random landscape" statistical model which is known to exhibit similar "ultra-slow" diffusion behavior. Following this hypothesis we conducted experiments to show empirically that the "generalization gap" stems from the relatively small number of updates rather than the batch size, and can be completely eliminated by adapting the training regime used. We further investigate different techniques to train models in the large-batch regime and present a novel algorithm named "Ghost Batch Normalization" which enables significant decrease in the generalization gap without increasing the number of updates. To validate our findings we conduct several additional experiments on MNIST, CIFAR-10, CIFAR-100 and ImageNet. Finally, we reassess common practices and beliefs concerning training of deep models and suggest they may not be optimal to achieve good generalization.
"""
https://arxiv.org/abs/1705.08741
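
The "ultra-slow diffusion" observation amounts to snapshotting the weights at initialization and tracking how far they drift as updates accumulate. A minimal PyTorch sketch of that bookkeeping is below; the toy model, random data, and hyperparameters are placeholders, not the paper's setup, and the log-like growth is reported by the authors for real networks and datasets rather than guaranteed for this toy.

import math
import torch
import torch.nn as nn

# toy stand-in model; snapshot its flattened weights at initialization
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
w0 = torch.cat([p.detach().flatten().clone() for p in model.parameters()])
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(1, 2001):
    x = torch.randn(128, 32)                  # stand-in for a real batch
    y = torch.randint(0, 10, (128,))
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
    if step % 500 == 0:
        w = torch.cat([p.detach().flatten() for p in model.parameters()])
        dist = torch.norm(w - w0).item()
        # the paper finds this distance grows roughly like log(step)
        print(step, dist, dist / math.log(step))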
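
The core idea of "Ghost Batch Normalization" is to compute batch-norm statistics over small "ghost" batches inside the large batch, so the noise in the normalization statistics stays similar to small-batch training. Here is a minimal PyTorch-style sketch of that idea, not the authors' reference code; the class name, the ghost_batch_size parameter, and the reuse of a single BatchNorm2d layer for per-chunk statistics are illustrative choices.

import torch
import torch.nn as nn

class GhostBatchNorm(nn.Module):
    """Apply batch norm separately to small chunks of a large batch (NCHW input)."""
    def __init__(self, num_features, ghost_batch_size=32):
        super().__init__()
        self.ghost_batch_size = ghost_batch_size
        # a standard BatchNorm layer supplies the per-chunk statistics
        # and accumulates running statistics for evaluation
        self.bn = nn.BatchNorm2d(num_features)

    def forward(self, x):
        if self.training:
            # split the large batch into "ghost" batches and normalize
            # each chunk with its own mean and variance
            chunks = x.split(self.ghost_batch_size, dim=0)
            return torch.cat([self.bn(chunk) for chunk in chunks], dim=0)
        # at eval time, fall back to the accumulated running statistics
        return self.bn(x)

Used as a drop-in replacement for BatchNorm2d, this keeps the number of weight updates unchanged while restoring small-batch-like normalization noise, which is the mechanism the abstract credits for shrinking the generalization gap.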
