liznerski / fcdd

Repository for the Explainable Deep One-Class Classification paper

Is the loss of FCDD different from the paper?

captainfffsama opened this issue · comments

fcdd_loss
In the paper, "the center corresponds to the bias term in the last layer of our networks", but in the code (1 2) it seems that we just use c = 0?
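For reference, here is a minimal pure-Python sketch of the per-sample objective as the paper describes it: a pseudo-Huber transform of the output map, averaged spatially, with the center hard-coded to 0 as in the linked code. The names `phi` and `label` are illustrative, not taken from the repository.

```python
import math

def fcdd_loss(phi, label):
    """Hedged sketch of the per-sample FCDD objective with center c = 0.

    phi   -- flattened output map of the fully convolutional network
    label -- 0 for nominal samples, 1 for anomalous ones
    """
    # Pseudo-Huber transform of each output value, then spatial mean.
    a = sum(math.sqrt(v * v + 1.0) - 1.0 for v in phi) / len(phi)
    if label == 0:
        return a  # pull nominal outputs toward the center
    return -math.log(1.0 - math.exp(-a))  # push anomalous outputs away

normal_loss = fcdd_loss([0.1, -0.2, 0.0], label=0)
anom_loss = fcdd_loss([3.0, 2.5, 4.0], label=1)
```

Note that c never appears explicitly here; the question is where it went.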

As you can see here, the conv layers have bias by default. Thus, the final conv layer of FCDD_CNN224_VGG also has a bias. This holds for all networks. Besides, by default, bias is enabled in all intermediate layers as well (see here), since if one uses some sort of anomalies (whether artificial or not), weight collapse to zero is no longer an issue.
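To make the point concrete, here is a toy one-dimensional sketch (illustrative names, not repository code): hard-coding c = 0 while the last layer keeps its bias b is the same as using a bias-free last layer with a learnable center c = -b.

```python
def dist_to_center(w, b, z, c):
    # |phi(z) - c| for a toy last layer phi(z) = w*z + b
    return abs(w * z + b - c)

w, b, z = 2.0, 0.5, 3.0
d_bias = dist_to_center(w, b, z, c=0.0)     # c = 0, bias active
d_center = dist_to_center(w, 0.0, z, c=-b)  # no bias, center c = -b
assert d_bias == d_center
```

So the center is not missing from the code; it is just represented by the bias parameters, which the optimizer trains along with the weights.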

Thanks for your answer and the excellent work. Since, as you say, weight collapse to zero is no longer an issue, would the objective function still work if we set c to any sensible value, e.g. c = 1?

OK, I just tried setting c = 1 and ran training on my custom dataset. The results seem close to c = 0... So it appears we can set c to any sensible value and the network still works.
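That observation is expected: shifting the center by a constant can be absorbed into the last layer's bias, so the loss landscapes for c = 0 and c = 1 are identical up to a reparametrization of b. A toy check with a 1-D last layer (illustrative names, not repository code):

```python
import math

def pseudo_huber(v):
    return math.sqrt(v * v + 1.0) - 1.0

def nominal_loss(w, b, z, c):
    # pseudo-Huberized distance |phi(z) - c| for a toy layer phi(z) = w*z + b
    return pseudo_huber(w * z + b - c)

w, z = 2.0, 3.0
# Center c = 1 with bias b corresponds to the same point in the loss
# landscape as center c = 0 with bias b - 1.
assert nominal_loss(w, 0.5, z, c=1.0) == nominal_loss(w, -0.5, z, c=0.0)
```

So with bias enabled, the optimizer can compensate for any fixed finite c, which is consistent with your c = 1 run matching the c = 0 results.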