Oleg Zabluda's blog
Tuesday, June 13, 2017
 
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) (2016) Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
"""
Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. [...] In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
[...]
Units that have a non-zero mean activation act as bias for the next layer. If such units do not cancel each other out, learning causes a bias shift for units in the next layer. The more the units are correlated, the higher their bias shift. We will see that Fisher optimal learning, i.e., the natural gradient (Amari, 1998), would correct for the bias shift by adjusting the weight updates. Thus, less bias shift brings the standard gradient closer to the natural gradient and speeds up learning. We aim at activation functions that push activation means closer to zero to decrease the bias shift effect. Centering the activations at zero has been proposed in order to keep the off-diagonal entries of the Fisher information matrix small (Raiko et al., 2012). For neural networks it is known that centering the activations speeds up learning (LeCun et al., 1991; 1998; Schraudolph, 1998). “Batch normalization” also centers activations with the goal to counter the internal covariate shift (Ioffe & Szegedy, 2015). Also the Projected Natural Gradient Descent algorithm (PRONG) centers the activations by implicitly whitening them (Desjardins et al., 2015).
[...]
The bias shift (mean shift) of unit i is the change of unit i’s mean value due to the weight update. Bias shifts of unit i lead to oscillations and impede learning. See Section 4.4 in LeCun et al. (1998) for demonstrating this effect at the inputs and in LeCun et al. (1991) for explaining this effect using the input covariance matrix.
"""
https://arxiv.org/abs/1511.07289
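
For reference, the activation the paper defines is f(x) = x for x > 0 and f(x) = α(exp(x) − 1) for x ≤ 0, which saturates to −α for very negative inputs (the paper uses α = 1 in its experiments). A minimal NumPy sketch of ELU next to ReLU and leaky ReLU; the 0.01 leak slope is just a common default, not something the excerpt fixes:

import numpy as np

def relu(x):
    # ReLU: identity for positive inputs, hard zero otherwise
    return np.maximum(x, 0.0)

def lrelu(x, slope=0.01):
    # Leaky ReLU: small linear slope on the negative side, so no saturation
    return np.where(x > 0, x, slope * x)

def elu(x, alpha=1.0):
    # ELU: identity for x > 0, alpha * (exp(x) - 1) for x <= 0,
    # saturating to -alpha as x -> -inf
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def elu_grad(x, alpha=1.0):
    # Derivative: 1 for x > 0, alpha * exp(x) (= elu(x) + alpha) for x <= 0
    return np.where(x > 0, 1.0, alpha * np.exp(x))

x = np.linspace(-6.0, 3.0, 10)
print(np.round(elu(x), 3))  # negative side flattens out near -alpha = -1.0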
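
The "push mean unit activations closer to zero" / bias-shift point is easy to see numerically: on zero-mean Gaussian pre-activations, ReLU and LReLU outputs have a clearly positive mean (which acts as a bias for the next layer), while ELU's negative part pulls the mean back toward zero. A quick illustrative check, not an experiment from the paper:

import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)  # zero-mean, unit-variance "pre-activations"

activations = {
    "ReLU":  lambda x: np.maximum(x, 0.0),
    "LReLU": lambda x: np.where(x > 0, x, 0.01 * x),
    "ELU":   lambda x: np.where(x > 0, x, np.exp(x) - 1.0),  # alpha = 1
}

for name, f in activations.items():
    print(f"{name:5s} mean activation: {f(z).mean():+.3f}")
# ReLU/LReLU means come out around +0.4; ELU's is roughly +0.16,
# i.e. noticeably closer to zero.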
