HIPS / autograd

Efficiently computes derivatives of numpy code.

autograd returns nan with the norm function

giangbang opened this issue

Hi, I'm using autograd to calculate the gradient of an l2 norm operator; the code is as simple as:

import autograd.numpy as np
from autograd import grad

def f(x):
    return np.linalg.norm(x, axis=-1) ** 2

f_dx = grad(f)
f_dx(np.array([[0., 0.]]))

However, when I pass the zero vector to f, it outputs nan:

>> \autograd\numpy\linalg.py:100: RuntimeWarning: invalid value encountered in scalar divide                                                          
  return expand(g / ans) * x 

When I change the code to something that does not involve linalg, it produces 0 as expected:

import autograd.numpy as np
from autograd import grad

def f(x):
    return np.sum(np.square(x))

f_dx = grad(f)
f_dx(np.array([[0, 0]], dtype=float))
array([[0., 0.]])
