Why does the DMI loss add 0.001, i.e. det(mat) + 0.001?
yyht opened this issue
Hi, since `torch.log` is applied to `det(mat)`, we usually add a tiny constant like 1e-10 for numerical stability. Why does this loss add 1e-3 instead?
Numerical stability: it avoids the degenerate case det(mat) = 0, which would otherwise produce log(0) = -inf and break training.
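A minimal sketch of the idea (the helper name `dmi_loss` and the tensor shapes are assumptions, not the repo's exact code; it follows the L_DMI form `-log(|det(U)| + c)` discussed above):

```python
import torch

def dmi_loss(probs, one_hot, eps=1e-3):
    # probs:   (batch, C) softmax outputs of the network
    # one_hot: (batch, C) one-hot encoded labels
    # Empirical joint matrix U between labels and predictions.
    mat = one_hot.t() @ probs / probs.shape[0]
    # eps keeps the argument of log strictly positive: when the
    # predictions collapse (e.g. all one class), mat is rank-deficient
    # and det(mat) = 0, so log(det) alone would be -inf.
    return -torch.log(torch.abs(torch.linalg.det(mat)) + eps)

# Normal case: distinct predictions give a well-conditioned mat.
probs = torch.tensor([[0.9, 0.1], [0.2, 0.8]])
one_hot = torch.tensor([[1.0, 0.0], [0.0, 1.0]])
print(dmi_loss(probs, one_hot))  # finite value

# Degenerate case: every prediction is class 0, det(mat) = 0,
# yet the loss stays finite thanks to eps.
collapsed = torch.tensor([[1.0, 0.0], [1.0, 0.0]])
print(dmi_loss(collapsed, one_hot))  # finite value, not -inf
```

With a tiny constant like 1e-10 the degenerate loss would spike to roughly `-log(1e-10) ≈ 23`, producing very large gradients; 1e-3 caps it near `-log(1e-3) ≈ 6.9`, which is gentler on optimization.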