Presenting the Importance of Random Initialization of the Weights

The problem of weight initialization is explained in the Stanford CS231n course notes:

“This turns out to be a mistake, because if every neuron in the network computes the same output, then they will also all compute the same gradients during backpropagation and undergo the exact same parameter updates. In other words, there is no source of asymmetry between neurons if their weights are initialized to be the same.”

In short, if initialization is done improperly, the network has serious problems learning features. This post provides some simple evidence of the importance of asymmetry in weight initialization.

Configuration of the neural network:
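A minimal sketch of the configuration, assuming a small 2-4-1 sigmoid network built with NumPy; the layer sizes, the bias terms, and the `init_weights` helper are illustrative choices, not necessarily the exact setup used in this post:

```python
import numpy as np

# XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(s):
    # derivative of the sigmoid expressed through its output s
    return s * (1.0 - s)

def init_weights(symmetric=False, hidden=4, seed=42):
    """Return (W1, b1, W2, b2) for a 2-hidden-1 network."""
    if symmetric:
        # every parameter gets the same value: no asymmetry between neurons
        W1, b1 = np.full((2, hidden), 0.1), np.full((1, hidden), 0.1)
        W2, b2 = np.full((hidden, 1), 0.1), np.full((1, 1), 0.1)
    else:
        rng = np.random.default_rng(seed)
        W1, b1 = rng.uniform(-1, 1, (2, hidden)), np.zeros((1, hidden))
        W2, b2 = rng.uniform(-1, 1, (hidden, 1)), np.zeros((1, 1))
    return W1, b1, W2, b2
```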

Learning loop:
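A plain batch gradient-descent loop on a squared-error loss, run for 10,000 iterations as mentioned below; the learning rate of 0.5 is an assumed value:

```python
def train(W1, b1, W2, b2, lr=0.5, iterations=10_000):
    for _ in range(iterations):
        # forward pass
        hidden = sigmoid(X @ W1 + b1)
        output = sigmoid(hidden @ W2 + b2)

        # backward pass for the squared-error loss 0.5 * (y - output)**2
        output_delta = (y - output) * sigmoid_grad(output)
        hidden_delta = (output_delta @ W2.T) * sigmoid_grad(hidden)

        # gradient-descent updates
        W2 += lr * hidden.T @ output_delta
        b2 += lr * output_delta.sum(axis=0, keepdims=True)
        W1 += lr * X.T @ hidden_delta
        b1 += lr * hidden_delta.sum(axis=0, keepdims=True)
    return W1, b1, W2, b2

def predict(W1, b1, W2, b2):
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
```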

Predictions if initialized with asymmetrical weights:
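Assuming the sketch above, the predictions for the random (asymmetrical) initialization are obtained like this; with distinct random weights the outputs should approach the XOR targets:

```python
params = train(*init_weights(symmetric=False))
print(predict(*params))  # the four outputs should approach [0, 1, 1, 0]
```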

Predictions if all weights are initialized to 0.1:
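And the symmetric case, where every parameter starts at 0.1; here the hidden neurons receive identical gradients, stay identical throughout training, and the network cannot separate the XOR classes:

```python
params = train(*init_weights(symmetric=True))
print(predict(*params))  # identical weights keep the hidden neurons identical, so XOR is never learned
```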

After 10,000 iterations the network still failed to solve the simple XOR problem, which is kind of embarrassing.


Complete code:
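Putting the pieces above together into one self-contained script (again a sketch under the stated assumptions about architecture and hyperparameters):

```python
import numpy as np

# XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(s):
    # derivative of the sigmoid expressed through its output s
    return s * (1.0 - s)

def init_weights(symmetric=False, hidden=4, seed=42):
    """Return (W1, b1, W2, b2) for a 2-hidden-1 network."""
    if symmetric:
        # every parameter gets the same value: no asymmetry between neurons
        W1, b1 = np.full((2, hidden), 0.1), np.full((1, hidden), 0.1)
        W2, b2 = np.full((hidden, 1), 0.1), np.full((1, 1), 0.1)
    else:
        rng = np.random.default_rng(seed)
        W1, b1 = rng.uniform(-1, 1, (2, hidden)), np.zeros((1, hidden))
        W2, b2 = rng.uniform(-1, 1, (hidden, 1)), np.zeros((1, 1))
    return W1, b1, W2, b2

def train(W1, b1, W2, b2, lr=0.5, iterations=10_000):
    for _ in range(iterations):
        # forward pass
        hidden = sigmoid(X @ W1 + b1)
        output = sigmoid(hidden @ W2 + b2)
        # backward pass for the squared-error loss
        output_delta = (y - output) * sigmoid_grad(output)
        hidden_delta = (output_delta @ W2.T) * sigmoid_grad(hidden)
        # gradient-descent updates
        W2 += lr * hidden.T @ output_delta
        b2 += lr * output_delta.sum(axis=0, keepdims=True)
        W1 += lr * X.T @ hidden_delta
        b1 += lr * hidden_delta.sum(axis=0, keepdims=True)
    return W1, b1, W2, b2

def predict(W1, b1, W2, b2):
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)

if __name__ == "__main__":
    for symmetric in (False, True):
        params = train(*init_weights(symmetric=symmetric))
        label = "all weights = 0.1" if symmetric else "random weights"
        print(f"Predictions with {label}:")
        print(predict(*params))
```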
