jimfleming / differentiation

Implementing (parts of) TensorFlow (almost) from Scratch

Home Page: http://jimfleming.me/differentiation/main.html

Biases are being broadcasted

When two arrays of shape (4, 4) and (4,) are added, the (4,)-shaped array is broadcast to shape (4, 4) by repeating its row:

>>> import numpy as np
>>> A = np.zeros((4, 4))
>>> biases = np.array([1, 2, 3, 4])
>>> C = A + biases
>>> C
array([[ 1.,  2.,  3.,  4.],
       [ 1.,  2.,  3.,  4.],
       [ 1.,  2.,  3.,  4.],
       [ 1.,  2.,  3.,  4.]])

This causes trouble during backpropagation because both A and biases receive a gradient of shape (4, 4), although the biases only have shape (4,).
If the biases are then updated via biases = biases - grad, the biases are silently broadcast to shape (4, 4).

I tried to implement neural networks in a similar way and made the same mistake, but NumPy caught it because I wrote biases -= grad instead, which throws an error rather than broadcasting.
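
For illustration, here is a minimal NumPy snippet of the two update styles (grad is just a stand-in for whatever gradient the backward pass produces; the exact error message may vary by NumPy version):

>>> import numpy as np
>>> biases = np.array([1., 2., 3., 4.])   # shape (4,)
>>> grad = np.ones((4, 4))                # gradient wrongly left at shape (4, 4)
>>> (biases - grad).shape                 # out-of-place update silently broadcasts
(4, 4)
>>> biases -= grad                        # in-place update refuses to change shape
Traceback (most recent call last):
  ...
ValueError: non-broadcastable output operand with shape (4,) doesn't match the broadcast shape (4,4)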

I think the solution might be to sum the gradients back down to their source shape when backpropagating:

>>> np.sum(C, 0)
array([  4.,   8.,  12.,  16.])

but doing it this way does not seem very elegant to me. I also tried this network on the MNIST data set and got a warning that exp overflowed, so I might be wrong everywhere.
EDIT: Someone else is doing an np.mean instead.
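
For what it's worth, here is a rough sketch of how that reduction could look in the backward pass of an add; unbroadcast and add_backward are hypothetical names, not functions from main.py:

    import numpy as np

    def unbroadcast(grad, shape):
        """Reduce grad back down to `shape` by summing over broadcast axes."""
        # Sum over leading axes that broadcasting prepended...
        while grad.ndim > len(shape):
            grad = grad.sum(axis=0)
        # ...and over axes that were size 1 in the original input.
        for axis, size in enumerate(shape):
            if size == 1:
                grad = grad.sum(axis=axis, keepdims=True)
        return grad

    def add_backward(grad_output, a, b):
        """Gradients of c = a + b, each reduced to its own input's shape."""
        return unbroadcast(grad_output, a.shape), unbroadcast(grad_output, b.shape)

With A of shape (4, 4) and biases of shape (4,), add_backward(np.ones((4, 4)), A, biases) returns gradients of shape (4, 4) and (4,); the bias gradient is the same column-wise reduction as np.sum(C, 0) above.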

If the following code is inserted at
https://github.com/jimfleming/differentiation/blob/master/main.py#L106
it can be seen that the biases change shape from (4,) to (4, 4) and from (1,) to (4, 1).
This makes the network too easy to train, because there is now a free parameter for every data point at the end of the network.

    print(sess.state[biases0])
    print(sess.state[biases1])
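
A small shape check right after those prints would also catch the problem early (a sketch using the same sess/biases0/biases1 names and the (4,) and (1,) shapes mentioned above):

    assert sess.state[biases0].shape == (4,), sess.state[biases0].shape
    assert sess.state[biases1].shape == (1,), sess.state[biases1].shape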

Weird, I never got a notification about this. Good catch; that seems like something np should at least warn about... I'll see if I can fix it soon.

EDIT: Spent some time looking into this. The fix is to include shape information, since the gradients for add/sub require reductions to invert broadcasting. Unfortunately, that's outside the scope of this demo. I think the quick fix is to simply remove the biases, which aren't required anyway.
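
For reference, one way to "include shape information" could be for the op to record its input shapes in the forward pass and apply the unbroadcast reduction sketched above in the backward pass; this is only a sketch, not the actual op from main.py:

    class Add:
        """Sketch of a shape-aware add op (hypothetical, not main.py's add)."""

        def forward(self, a, b):
            # Remember the input shapes so backward() can undo broadcasting.
            self.a_shape, self.b_shape = a.shape, b.shape
            return a + b

        def backward(self, grad_output):
            # Reduce the incoming gradient back to each input's original shape,
            # using the unbroadcast() helper sketched earlier.
            return (unbroadcast(grad_output, self.a_shape),
                    unbroadcast(grad_output, self.b_shape))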