Reexamining the principle of mean-variance preservation for neural network initialization

Before backpropagation training, it is common to randomly initialize a neural network so that the mean and variance of activity are uniform across neurons. Classically, these statistics were defined over an ensemble of random networks. Alternatively, they can be defined over a random sample of inputs to the network.
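To make the distinction concrete, here is a small NumPy sketch (our illustration, not taken from the paper) of the two definitions for a single ReLU layer: the ensemble statistics of a unit are taken over random draws of the weights with the input held fixed, while the sample statistics are taken over a batch of inputs with the weights held fixed. The layer width, weight scale, and sample sizes are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 256, 256
scale = np.sqrt(2.0 / n_in)          # He-style scaling for a ReLU layer

def relu(z):
    return np.maximum(z, 0.0)

# (a) Ensemble statistics: one fixed input, many random weight matrices.
x = rng.standard_normal(n_in)
ens = np.stack([relu(scale * rng.standard_normal((n_out, n_in)) @ x)
                for _ in range(2000)])
print("ensemble: per-unit mean %.3f (+/- %.3f across units), var %.3f (+/- %.3f)"
      % (ens.mean(axis=0).mean(), ens.mean(axis=0).std(),
         ens.var(axis=0).mean(), ens.var(axis=0).std()))

# (b) Sample statistics: one fixed weight matrix, many random inputs.
W = scale * rng.standard_normal((n_out, n_in))
X = rng.standard_normal((2000, n_in))
smp = relu(X @ W.T)
print("sample:   per-unit mean %.3f (+/- %.3f across units), var %.3f (+/- %.3f)"
      % (smp.mean(axis=0).mean(), smp.mean(axis=0).std(),
         smp.var(axis=0).mean(), smp.var(axis=0).std()))

By symmetry, every unit has the same ensemble-defined statistics, whereas the sample-defined statistics of a fixed network generally differ from unit to unit.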

We show analytically and numerically that these two formulations of the principle of mean-variance preservation are very different in deep networks that use the rectification nonlinearity (ReLU). We numerically investigate training speed after data-dependent initialization of networks to preserve sample mean and variance.
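As a rough illustration of what a data-dependent initialization can look like (a minimal sketch under assumed details, not necessarily the paper's exact procedure), the snippet below pushes a batch of inputs through a ReLU multilayer perceptron and, layer by layer, rescales the weights and shifts the biases so that the pre-activation sample mean is approximately zero and the sample variance approximately one over that batch. The architecture, batch size, and normalization targets are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(z, 0.0)

def data_dependent_init(layer_sizes, X):
    """Initialize a ReLU MLP so each layer's pre-activations have ~zero sample
    mean and ~unit sample variance over the batch X."""
    params = []
    h = X                                    # activity flowing through the net
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = rng.standard_normal((n_in, n_out)) / np.sqrt(n_in)
        z = h @ W                            # pre-activations on the batch
        W /= z.std(axis=0) + 1e-8            # enforce unit sample variance per unit
        b = -(h @ W).mean(axis=0)            # enforce zero sample mean per unit
        params.append((W, b))
        h = relu(h @ W + b)                  # feed normalized activity forward
    return params

# Usage: initialize on a batch, then verify the per-layer sample statistics.
X = rng.standard_normal((512, 64))           # a batch of 512 inputs
params = data_dependent_init([64, 128, 128, 10], X)
h = X
for W, b in params:
    z = h @ W + b
    print("pre-activation sample mean %+.3f, variance %.3f" % (z.mean(), z.var()))
    h = relu(z)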
