banctilrobitaille / torch-vectorized

Fast analytical implementation of batch eigen-decomposition for 3x3 symmetric matrices with PyTorch. More than 250x faster than the regular PyTorch implementation of batch eigen-decomposition on GPU.

Home Page: https://torch-vectorized.readthedocs.io/en/latest/

Getting negative eigenvalues for PDS matrices

zhesu1 opened this issue

I always get negative eigenvalues for randomly generated PDS (positive definite symmetric) matrices when one eigenvalue is very close to 0 or very large. Is there any way to deal with this situation? The following is a specific example that gives a negative eigenvalue, even though the true eigenvalue is not really close to 0.

import torch
from torchvectorized import vlinalg

T = torch.tensor([[ 1.3999e+00,  1.5765e+00, -5.5419e+03],
        [ 1.5765e+00,  2.1994e+00, -7.3147e+03],
        [-5.5419e+03, -7.3147e+03,  2.4693e+07]], dtype=torch.float64)
vlinalg.vSymEig(T.reshape((1, 9, 1, 1, 1)), eigenvectors=False)[0]

The result is:
tensor([[[[[-7.9096e-02]]], [[[ 2.6781e-01]]], [[[ 2.4693e+07]]]]], dtype=torch.float64)

But if we use the built-in function in PyTorch:

torch.symeig(T, eigenvectors=True)[0]

Then the eigenvalues are all positive:
tensor([4.5855e-03, 1.8413e-01, 2.4693e+07], dtype=torch.float64)
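
As an aside, torch.symeig has since been deprecated (and eventually removed) in favor of torch.linalg.eigh, which returns eigenvalues in ascending order. A minimal equivalent check on the same matrix, for readers on newer PyTorch versions:

import torch

T = torch.tensor([[ 1.3999e+00,  1.5765e+00, -5.5419e+03],
                  [ 1.5765e+00,  2.1994e+00, -7.3147e+03],
                  [-5.5419e+03, -7.3147e+03,  2.4693e+07]], dtype=torch.float64)

# torch.linalg.eigh returns a named tuple (eigenvalues, eigenvectors)
print(torch.linalg.eigh(T).eigenvalues)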

Hello @zhesu1, thank you for reporting this issue. I played a bit with your example and it seems that it's related to numerical approximation imprecision when computing the second eigenvalue (which also impacts the third):
eig_vals[:, 0, :, :, :] = q + 2 * p * torch.cos(phi)
eig_vals[:, 1, :, :, :] = q + 2 * p * torch.cos(phi + pi * (2.0 / 3.0))
eig_vals[:, 2, :, :, :] = 3 * q - eig_vals[:, 0, :, :, :] - eig_vals[:, 1, :, :, :]
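
For context, these lines follow the standard noniterative formulas for the eigenvalues of a 3x3 symmetric matrix A: with q = tr(A)/3, p a scale factor built from the squared off-diagonal entries, and phi = acos(det(B)/2)/3 for the shifted matrix B = (A - q*I)/p, two eigenvalues come from the cosine terms and the last from the trace identity e0 + e1 + e2 = 3q. A minimal standalone sketch of that method (my own reconstruction for illustration, not the library's exact code):

import math
import torch

def sym_eigvals_3x3(A: torch.Tensor) -> torch.Tensor:
    # Analytical eigenvalues of one 3x3 symmetric matrix (assumes A is
    # not diagonal; a production version would special-case p == 0).
    q = torch.trace(A) / 3.0                           # mean eigenvalue, tr(A)/3
    p1 = A[0, 1] ** 2 + A[0, 2] ** 2 + A[1, 2] ** 2    # off-diagonal energy
    p2 = (A[0, 0] - q) ** 2 + (A[1, 1] - q) ** 2 + (A[2, 2] - q) ** 2 + 2 * p1
    p = torch.sqrt(p2 / 6.0)
    B = (A - q * torch.eye(3, dtype=A.dtype)) / p      # shifted, scaled matrix
    r = torch.det(B) / 2.0
    phi = torch.acos(torch.clamp(r, -1.0, 1.0)) / 3.0  # clamp guards round-off
    e0 = q + 2 * p * torch.cos(phi)
    e1 = q + 2 * p * torch.cos(phi + math.pi * (2.0 / 3.0))
    e2 = 3 * q - e0 - e1                               # trace identity
    return torch.stack([e0, e1, e2])

The numerical trouble in the reported matrix is visible directly in these formulas: q and 2*p are both around 1e7, while two of the true eigenvalues are below 1, so the small eigenvalues are obtained as the difference of two nearly equal large terms and most significant digits cancel. Any small error in phi, including one coming from a truncated pi constant, gets amplified by the 2*p factor.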

From what I've seen, the number of digits used in the approximation of pi can influence the result quite a bit when dealing with large numbers (as in your case). I'll investigate how I can improve the computation accuracy.
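
To make that sensitivity concrete: here 2*p is roughly 1.6e7, so an error of about 1e-7 in pi (the gap between a float32-precision constant and the true value) propagates through the cosine term into an absolute eigenvalue error of order 1, which is enough to push eigenvalues of size 4.6e-03 or 1.8e-01 negative. A quick hypothetical experiment, repeating the same formulas as in the sketch above but with the pi constant passed in as a parameter so its precision can be varied:

import math
import torch

def eigvals_with_pi(A: torch.Tensor, pi_const: float) -> torch.Tensor:
    # Same analytical formulas as above, with an injectable pi constant.
    q = torch.trace(A) / 3.0
    p1 = A[0, 1] ** 2 + A[0, 2] ** 2 + A[1, 2] ** 2
    p2 = (A[0, 0] - q) ** 2 + (A[1, 1] - q) ** 2 + (A[2, 2] - q) ** 2 + 2 * p1
    p = torch.sqrt(p2 / 6.0)
    B = (A - q * torch.eye(3, dtype=A.dtype)) / p
    phi = torch.acos(torch.clamp(torch.det(B) / 2.0, -1.0, 1.0)) / 3.0
    e0 = q + 2 * p * torch.cos(phi)
    e1 = q + 2 * p * torch.cos(phi + pi_const * (2.0 / 3.0))
    return torch.stack([e0, e1, 3 * q - e0 - e1])

T = torch.tensor([[ 1.3999e+00,  1.5765e+00, -5.5419e+03],
                  [ 1.5765e+00,  2.1994e+00, -7.3147e+03],
                  [-5.5419e+03, -7.3147e+03,  2.4693e+07]], dtype=torch.float64)

print(eigvals_with_pi(T, 3.1415927))  # pi truncated to float32 precision
print(eigvals_with_pi(T, math.pi))    # full float64 pi

Even with an exact pi, computing a tiny eigenvalue of a matrix whose norm is around 1e7 by this route loses digits to cancellation, so some residual error is to be expected in float64.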