Newbeeer / L_DMI

Code for NeurIPS 2019 Paper, "L_DMI: An Information-theoretic Noise-robust Loss Function"

Why does the DMI loss add 0.001, i.e. det(mat) + 0.001?

yyht opened this issue · comments

commented

Hi, since this loss applies torch.log to det(mat), and the usual practice is to add 1e-10 for numerical stability, why does this loss add 1e-3 instead?

Numerical stability. It avoids the degenerate case det(mat) = 0, which would otherwise produce log(0).

Very sorry for the huge delay. Yes, @dbp1994 gets the point. We add the small perturbation to avoid the det(mat) = 0 case.
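For context, the computation under discussion can be sketched as follows. This is a minimal, hypothetical reconstruction (function name `dmi_loss` and the argument shapes are assumptions, not the repository's actual code): the loss is the negative log of the absolute determinant of the joint matrix between one-hot labels and softmax outputs, with the small constant keeping the log argument strictly positive when the determinant collapses toward zero.

```python
import torch

def dmi_loss(probs, labels_onehot, eps=1e-3):
    """Sketch of an information-theoretic (DMI-style) loss.

    probs:         (N, C) softmax outputs of the classifier.
    labels_onehot: (N, C) one-hot encoded labels.
    eps:           perturbation keeping log's argument positive
                   when det(mat) degenerates to 0.
    """
    # Empirical joint matrix between labels and predictions, shape (C, C).
    mat = labels_onehot.t() @ probs / labels_onehot.shape[0]
    # Without eps, a rank-deficient mat gives det(mat) = 0 and log(0) = -inf.
    return -torch.log(torch.abs(torch.det(mat)) + eps)

# Perfect predictions on a balanced 3-class batch: mat = I/3, det = 1/27.
loss = dmi_loss(torch.eye(3), torch.eye(3))
```

A tiny constant like 1e-10 would still let the log take very large negative-argument-adjacent values and produce huge gradients near a singular `mat`; a larger 1e-3 trades a small bias for smoother optimization, which matches the maintainer's stability rationale above.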