VisionLearningGroup / SSDA_MME

Semi-supervised Domain Adaptation via Minimax Entropy

Reproducing the results for Resnet-34 in real-to-sketch adaptation

andriiD opened this issue · comments

Hello, thank you for sharing the code.

Could you publish the commands for training the model to reproduce the Table 1 results of the paper for the 3-shot setting? I ran training with "labeled_source_images_real.txt" as the source annotations, "labeled_target_images_sketch_3.txt" as the labeled target set, and "unlabeled_target_images_sketch_3.txt" for domain adaptation. The model was evaluated on "unlabeled_target_images_sketch_3.txt".
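For reference, a training invocation along these lines should correspond to the 3-shot real-to-sketch setting. This is a sketch based on the repo's main.py; verify the flag names against your checkout:

```shell
# Hedged sketch: 3-shot MME training, real -> sketch, ResNet-34.
# --num selects the number of labeled target examples per class.
CUDA_VISIBLE_DEVICES=0 python main.py --method MME --dataset multi \
    --source real --target sketch --net resnet34 --num 3
```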

For ResNet-34 I obtained the following results:
ACC All 62.775959, ACC Averaged over Classes 63.848788
Table 1 of the paper states the performance is 72.2% for this setting.

Maybe I made a mistake in the training setup. Thank you

Hi, for real-to-sketch adaptation, your result is almost the same as what we reported in Table 1 of the paper.

I found the mistake. I looked at the results in the real-to-clipart column. Thanks

I use this command to evaluate the trained model:
CUDA_VISIBLE_DEVICES=1 python eval.py --method MME --dataset multi --source real --target sketch --net alexnet --step 1000
[screenshot of the eval.py output]
How to obtain the accuracy?
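For what it's worth, the two numbers quoted earlier ("ACC All" and "ACC Averaged over Classes") are just overall accuracy and mean per-class accuracy. A minimal sketch of how to compute them from saved predictions, assuming you can collect the ground-truth and predicted labels from the evaluation script (the function name here is illustrative, not from the repo):

```python
import numpy as np

def accuracy_metrics(y_true, y_pred, num_classes):
    """Return (overall accuracy, mean per-class accuracy) in percent."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # Confusion matrix: rows = ground truth, columns = prediction.
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    # "ACC All": fraction of all samples classified correctly.
    acc_all = 100.0 * np.trace(cm) / cm.sum()
    # "ACC Averaged over Classes": mean of per-class recalls,
    # guarding against classes with no test samples.
    per_class = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)
    acc_avg = 100.0 * per_class.mean()
    return acc_all, acc_avg

if __name__ == "__main__":
    y_true = [0, 0, 1, 1, 2, 2]
    y_pred = [0, 1, 1, 1, 2, 0]
    print(accuracy_metrics(y_true, y_pred, 3))
```

The averaged-over-classes number differs from overall accuracy whenever the test classes are imbalanced, which is why the two values you obtained (62.78 vs. 63.85) do not match exactly.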