Hui-Li / multi-task-learning-example-PyTorch

Homoscedastic Loss Function

nivesh48 opened this issue · comments

This isn't an issue but a doubt I would like to clarify. When I use the homoscedastic loss in my area of research, the loss values are negative and start converging to a negative value. Is this behavior natural for this multi-task loss, or am I making a mistake?

Same question. I also observed that the log_var values keep decreasing into negative territory, and the total loss keeps decreasing. I have no idea whether the model can converge.
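For context, as far as I understand, the loss in the uncertainty-weighting formulation (Kendall et al.) for two regression tasks is, up to constants:

```latex
\mathcal{L}(W, \sigma_1, \sigma_2)
  = \frac{1}{2\sigma_1^2}\,\mathcal{L}_1(W)
  + \frac{1}{2\sigma_2^2}\,\mathcal{L}_2(W)
  + \log \sigma_1 + \log \sigma_2
```

Since this is a negative log-likelihood of a continuous (Gaussian) density, it is not bounded below by zero: whenever σ_i < 1 the log σ_i terms are negative, so a negative and still-decreasing total loss can be normal, as long as the individual task losses L_1 and L_2 themselves keep improving.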

I think the problem arises from the fact that the implementation doesn't use …
See Issue #3.

I don't think the formula is wrong. The uncertainty parameter is log σ², so exp(-log σ²) = 1/σ².
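A minimal sketch of this parameterization (learning s = log σ² per task and weighting each task loss by exp(-s)); the class and variable names here are illustrative, not necessarily the repo's exact code:

```python
import torch
import torch.nn as nn

class HomoscedasticLoss(nn.Module):
    """Weighs per-task losses with learned log-variances, s_i = log sigma_i^2."""
    def __init__(self, num_tasks: int):
        super().__init__()
        # One learnable log-variance per task, initialised to 0 (i.e. sigma^2 = 1).
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = 0.0
        for loss, log_var in zip(task_losses, self.log_vars):
            precision = torch.exp(-log_var)             # exp(-log sigma^2) = 1 / sigma^2
            total = total + precision * loss + log_var  # weighted task loss + penalty
        return total
```

Under this reading, the learned parameter is log σ², so exp(-log_var) is already the 1/σ² weight, which is the point made above.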

Hi @zhackzey, I think, as Issue #3 reported: the uncertainty parameter is σ^-2 and the uncertainty penalty is log σ. When taking the exp in the code, exp(-log σ) = σ^-1, which differs from the uncertainty parameter σ^-2 in the paper.

Hi @zhackzey, just realized that the author corrected the formula in a new version of the paper: https://arxiv.org/pdf/1703.04977.pdf
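For anyone landing here later: in the revised version of that paper, the regression loss is (to my reading) written in terms of s := log σ², roughly as in the sketch below. The 0.5 factors and the function/argument names are my own reading, not the repo's code:

```python
import torch

def regression_loss_with_log_var(y_pred, y_true, s):
    """Loss for one regression task, with s = log sigma^2 as a 0-dim tensor
    (e.g. an nn.Parameter); a sketch of the revised formulation, not the repo's code."""
    sq_err = (y_true - y_pred) ** 2
    # Weight the squared error by 0.5 * exp(-s) = 1 / (2 sigma^2), penalise by 0.5 * s = log sigma.
    return torch.mean(0.5 * torch.exp(-s) * sq_err + 0.5 * s)
```

Note the 0.5 * s penalty: since log σ = 0.5 · log σ², this is the log σ penalty Issue #3 points at, while the weight stays proportional to exp(-s) = 1/σ², so both readings above can be reconciled.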