marcellacornia / mlnet

A Deep Multi-Level Network for Saliency Prediction. ICPR 2016

A little confused about the loss function

Time1ess opened this issue · comments

Hi,
I have some questions about the way you implemented the loss function described in your paper. According to the paper, the deviation between predicted and ground-truth values is weighted by a linear function alpha - y_i and then squared to obtain the error. But in your code, K.mean(K.square((y_pred / max_y) - y_true) / (1 - y_true + 0.1)), only the numerator is squared, not the whole fraction. Also, the L2 regularization term mentioned in the paper does not appear in the loss function implementation. Have I misunderstood something?

Hi,
thanks for your interest in our work.
The correct implementation of our loss function is the one reported in our source code; in the paper we made a small mistake by squaring the whole fraction. The L2 regularization term is instead added as a weight regularizer in the EltWise Product layer, as you can see in the model.py file.
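To make the difference concrete, here is a minimal NumPy sketch contrasting the two formulations: the code version (square the numerator only, then divide by the weight) and the variant printed in the paper (square the whole fraction). Function names and the epsilon value 0.1 follow the snippet quoted above; this is an illustration, not the repository's actual Keras code.

```python
import numpy as np

def mlnet_loss(y_pred, y_true, eps=0.1):
    """Loss as in the repository's code: the squared error is divided by
    the weight (1 - y_true + eps), which emphasizes errors at locations
    with high ground-truth saliency. Only the numerator is squared."""
    max_y = np.max(y_pred)  # normalize predictions by their maximum
    return np.mean(np.square(y_pred / max_y - y_true) / (1 - y_true + eps))

def paper_variant(y_pred, y_true, eps=0.1):
    """Variant printed in the paper: the whole fraction is squared,
    so the weight effectively enters squared as well."""
    max_y = np.max(y_pred)
    return np.mean(np.square((y_pred / max_y - y_true) / (1 - y_true + eps)))
```

Because the weight is at most 1.1, squaring the fraction changes the relative emphasis across pixels, so the two losses generally give different values and different gradients.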

Thanks, it makes sense to me now.