HIPS / autograd

Efficiently computes derivatives of numpy code.

squared L2 norm NaN at 0

zyzhang1130 opened this issue

import autograd.numpy as np
from autograd.numpy import linalg as LA
from autograd import grad 

def l2norm(x):
    return LA.norm(x)**2
grad_l2 = grad(l2norm)

print(grad_l2(np.array([0.0,0.0,0.0])))

As described in the title, the output is '[nan nan nan]', whereas it should be '[0. 0. 0.]' instead.

Even simpler, autograd can't do something like:

import autograd.numpy as np
from autograd import grad

def ex(x):
    return np.sqrt(x) ** 2

grad(ex)(0.)

> ZeroDivisionError: 0.0 cannot be raised to a negative power
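
To see where the error comes from: autograd's derivative rule for sqrt is (roughly) 0.5 * x**(-0.5), and it is this rule that gets evaluated at 0. A minimal sketch in plain Python:

def d_sqrt(x):
    # roughly what autograd's derivative rule for np.sqrt evaluates
    return 0.5 * x ** (-0.5)

d_sqrt(0.0)  # ZeroDivisionError: 0.0 cannot be raised to a negative power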

and also:

def ex(x):
    return 1/(1/x)

grad(ex)(0.)

> nan
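
The nan can be reproduced by tracing the chain rule by hand with numpy scalars (a sketch of what autograd does internally; the divide-by-zero RuntimeWarnings are expected here):

import numpy as np

x = np.float64(0.0)
u = 1.0 / x            # inf (numpy returns inf instead of raising)
du_dx = -1.0 / x ** 2  # -inf: derivative of the inner 1/x
df_du = -1.0 / u ** 2  # -0.0: derivative of the outer 1/u at u = inf
print(df_du * du_dx)   # -0.0 * -inf = nan, matching grad(ex)(0.)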

It should be clearer now why my examples fail, and the same idea applies to your example: autograd applies the chain rule primitive by primitive, so the derivative rule for sqrt (or norm) is evaluated right at 0, even though the overall composition is perfectly smooth there. I think this is a general limitation of auto-diff, not just this library, in that auto-diff won't make a symbolic simplification for you.
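
Concretely for your l2norm: norm is a primitive whose gradient is x / norm(x), which is 0/0 at the origin, and the nan then propagates through the chain rule for **2. A hand-traced sketch with plain numpy:

import numpy as np

x = np.array([0.0, 0.0, 0.0])
n = np.linalg.norm(x)    # 0.0
dnorm_dx = x / n         # [nan nan nan]: 0/0 at the origin
print(2 * n * dnorm_dx)  # chain rule for norm(x)**2 -> [nan nan nan]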

Your example can be solved by just using:

def l2norm(x):
    return (x**2).sum()
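
In this form every primitive (squaring and summation) has a finite derivative at 0, so the gradient evaluates cleanly:

import autograd.numpy as np
from autograd import grad

def l2norm(x):
    return (x**2).sum()

print(grad(l2norm)(np.array([0.0, 0.0, 0.0])))  # [0. 0. 0.]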

Thanks for the info. It is my first time using any auto-diff library.