JDAI-CV / FADA

(ECCV 2020) Classes Matter: A Fine-grained Adversarial Approach to Cross-domain Semantic Segmentation


There is a bit of inconsistency between the code and the paper

ALEX13679173326 opened this issue

Hi, I find there is a bit of inconsistency between the code and the paper.
In the code, the adversarial loss is calculated by
loss_adv_tgt = 0.001*soft_label_cross_entropy(tgt_D_pred, torch.cat((tgt_soft_label, torch.zeros_like(tgt_soft_label)), dim=1))
in which the loss is calculated between the target prediction and the target labels.

However, in the paper, the adversarial loss is written like this:
[formula image from the paper]
There, the adversarial loss is calculated between the target label and the source prediction.
Is that right?

Hi, thanks for your question.
Actually, as you can see in the code, model_D has a 2K-channel output, so tgt_D_pred includes P(d=0, c=k|f_j) and P(d=1, c=k|f_j) for every class k in the formula. We set a_{jk} to 0 for all the P(d=1, c=k|f_j) terms, which makes our implementation consistent with the formula in the paper.
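To make the correspondence concrete, here is a minimal, self-contained sketch of how the zero-padded soft labels interact with the 2K-channel discriminator output. The `soft_label_cross_entropy` below is a simplified stand-in (with a mean reduction) for the repository's helper, and K=19 and the tensor shapes are just illustrative assumptions, not the exact repo code.

```python
import torch
import torch.nn.functional as F

def soft_label_cross_entropy(pred, soft_label):
    # pred: (N, 2K, H, W) discriminator logits over the joint (domain, class) space.
    # soft_label: (N, 2K, H, W) weights a_{jk}; entries set to 0 simply drop the
    # corresponding log-probability terms from the sum in the paper's formula.
    log_prob = F.log_softmax(pred, dim=1)
    return -(soft_label * log_prob).sum(dim=1).mean()

K = 19  # number of semantic classes, e.g. Cityscapes
tgt_D_pred = torch.randn(2, 2 * K, 4, 4)                          # 2K-channel discriminator output
tgt_soft_label = torch.softmax(torch.randn(2, K, 4, 4), dim=1)    # class knowledge a_{jk} for target features

# Channels [0, K) correspond to P(d=0, c=k | f_j) and channels [K, 2K) to P(d=1, c=k | f_j).
# Zeroing the second half keeps only the log P(d=0, c=k | f_j) terms, weighted per class by a_{jk},
# which is exactly the "set a_{jk} to 0 for all P(d=1, c=k | f_j)" choice described above.
full_label = torch.cat((tgt_soft_label, torch.zeros_like(tgt_soft_label)), dim=1)
loss_adv_tgt = 0.001 * soft_label_cross_entropy(tgt_D_pred, full_label)
```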