fungtion / DANN

pytorch implementation of Domain-Adversarial Training of Neural Networks
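
For context, the core of DANN is a gradient reversal layer (GRL): it acts as the identity on the forward pass and multiplies gradients by a negative factor on the backward pass, so the feature extractor is trained adversarially against the domain classifier. A minimal PyTorch sketch of such a layer (names here are illustrative, not necessarily the ones used in this repo):

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Gradient reversal: identity forward, gradient scaled by -alpha backward."""

    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip (and scale) the gradient flowing back into the feature extractor;
        # None corresponds to the non-tensor input `alpha`.
        return grad_output.neg() * ctx.alpha, None

def grad_reverse(x, alpha=1.0):
    return GradReverse.apply(x, alpha)
```

In use, the domain head sees reversed gradients, e.g. `domain_logits = domain_classifier(grad_reverse(features, alpha))`, while the label head is attached to `features` directly.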

Why is the achieved classification accuracy on MNIST-M much higher than that of the paper?

AlanLuSun opened this issue · comments

Hi, Shicheng. I downloaded your code yesterday and found that the DANN trained on both labeled MNIST and unlabeled MNIST-M achieved the following accuracy on the test sets:

epoch: 99, accuracy of the mnist dataset: 0.987900
epoch: 99, accuracy of the mnist_m dataset: 0.907121

As we know, the original paper reports 0.7666 accuracy when transferring from MNIST to MNIST-M, while here I got 0.907121. Do you know the reasons behind this? I checked that the neural network architecture is almost the same as in the paper.

Thanks in advance for your help.

I'm not Shicheng. The only difference between the original paper and my implementation is the optimizer; I'm not sure whether there are other reasons that would lead to the mismatch.
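
For reference, the paper trains with plain SGD (momentum 0.9) and anneals the learning rate as μ_p = μ0 / (1 + α·p)^β with μ0 = 0.01, α = 10, β = 0.75, where p grows from 0 to 1 over training; swapping that for a different optimizer with a fixed learning rate could plausibly account for part of the gap. A rough sketch of the paper's schedule (the model and step counts below are placeholders, not this repo's actual training loop):

```python
import torch

model = torch.nn.Linear(10, 2)  # placeholder network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def paper_lr(p, mu_0=0.01, alpha=10.0, beta=0.75):
    # Learning-rate annealing from Ganin et al. (2016), Sec. 5.2.
    return mu_0 / (1.0 + alpha * p) ** beta

n_epochs, steps_per_epoch = 100, 500  # illustrative values
for epoch in range(n_epochs):
    for step in range(steps_per_epoch):
        # p is the training progress, linearly increasing from 0 to 1.
        p = (epoch * steps_per_epoch + step) / (n_epochs * steps_per_epoch)
        for group in optimizer.param_groups:
            group["lr"] = paper_lr(p)
        # ... forward pass, compute loss, loss.backward(), optimizer.step() ...
```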