Oleg Zabluda's blog
Monday, February 19, 2018
 
Use random linear interpolation for data & labels to improve stuff

mixup: Beyond Empirical Risk Minimization (2017)
"""
mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
"""
https://arxiv.org/abs/1710.09412v1
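The core of mixup is only a few lines: draw lam ~ Beta(alpha, alpha), then take the same convex combination of a pair of examples and of their labels. Below is a minimal NumPy sketch of that idea; the function name mixup_batch is mine, labels are assumed one-hot, and alpha=0.2 is just an illustrative default, not a recommendation from the paper.

import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Mix a batch (x, y) with a randomly shuffled copy of itself.

    x: array of inputs, shape (batch, ...)
    y: one-hot labels, shape (batch, num_classes)
    """
    rng = np.random.default_rng() if rng is None else rng
    # One mixing coefficient per batch, drawn from Beta(alpha, alpha)
    lam = rng.beta(alpha, alpha)
    # Pair each example with another random example from the same batch
    idx = rng.permutation(len(x))
    # Same convex combination for inputs and labels
    x_mixed = lam * x + (1.0 - lam) * x[idx]
    y_mixed = lam * y + (1.0 - lam) * y[idx]
    return x_mixed, y_mixed

Training then proceeds as usual on (x_mixed, y_mixed); because the labels are mixed with the same lam as the inputs, the network is pushed toward the linear in-between behavior the abstract describes.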
