khalooei / ALOCC-CVPR2018

Adversarially Learned One-Class Classifier for Novelty Detection (ALOCC)

refinement loss

sertaca opened this issue · comments

Dear Sir,
In the paper the refinement loss is defined as a Euclidean loss, but in the TensorFlow code it is implemented as a cross-entropy loss. Why are they different?
Also, did you try an L1 loss?
Thanks

Dear @sertaca, I'm sorry for the late reply; I have a lot of tasks at the moment and not enough time to give this the attention it deserves. By the way, your question is a good one, and I appreciate that you raised this important point. As I mentioned in earlier issues, this is not the final version of our implementation. We released a cleaned-up version of our research code publicly on GitHub so that researchers who want to follow our ideas don't get stuck on the coding step, and so that it accelerates their work.
For your other question: yes, we use different losses for the different parts of our network (the R network and the overall objective), and at the time the Euclidean loss worked well for the R network in our cases. In principle, any loss can be used to compare the input and the output of the first branch of our network (the refinement module, R); it essentially minimizes the difference between the target value (in our case, the clean input to R) and the estimated value (the refined output of R).
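For anyone comparing the options discussed above, here is a minimal sketch, not the repository's exact code, of how the three candidate reconstruction losses for the R network could look in TensorFlow. The tensor names `x`, `r_of_x`, and `r_logits` are assumed placeholders for the clean input, the refined output, and the pre-sigmoid output of R.

```python
# Hypothetical sketch of candidate reconstruction losses for the R network.
# Tensor names and shapes are assumptions, not taken from the ALOCC repo.
import tensorflow as tf

def euclidean_loss(x, r_of_x):
    # Euclidean (L2) reconstruction loss, as described in the paper:
    # mean of the squared differences between target and estimate.
    return tf.reduce_mean(tf.square(x - r_of_x))

def l1_loss(x, r_of_x):
    # L1 alternative asked about in the question:
    # mean of the absolute differences.
    return tf.reduce_mean(tf.abs(x - r_of_x))

def crossentropy_loss(x, r_logits):
    # Pixel-wise sigmoid cross-entropy, the kind of loss the question says
    # appears in the released TensorFlow code; expects targets in [0, 1]
    # and pre-sigmoid logits from R.
    return tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=x, logits=r_logits))
```

As a rough design note, the sigmoid cross-entropy variant treats each pixel as a Bernoulli target, which can behave well for images scaled to [0, 1], while the Euclidean loss matches the formulation given in the paper; either way, it is a reconstruction penalty between the clean input and the refined output.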