Raykoooo / IAST

IAST: Instance Adaptive Self-training for Unsupervised Domain Adaptation (ECCV 2020)


The warmup_at stage can't be reproduced

parquets opened this issue · comments

Hi, I've run into a problem in the warmup_at stage (the second stage). The mIoU of the model does not improve over the source-only model: it keeps oscillating up and down, and the best checkpoint reaches 32.63, no better than the source-only result. I can't find what is wrong. In addition, I found that the BN layers are not frozen correctly.
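For reference, freezing BatchNorm during warmup/self-training usually means both switching the layers to eval mode (so running statistics stop updating) and stopping gradients to their affine parameters. A minimal PyTorch sketch of one common way to do this (the helper name `freeze_bn` is my own, not from the IAST codebase):

```python
import torch.nn as nn

def freeze_bn(model):
    """Freeze every BatchNorm layer: eval mode fixes the running
    mean/var, and requires_grad=False fixes the affine weights."""
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.eval()
            for p in m.parameters():
                p.requires_grad = False
    return model
```

Note that a later call to `model.train()` flips BN layers back into training mode, so the freeze has to be reapplied after every `model.train()` call; this is a frequent cause of BN layers that are "not frozen correctly".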

Could you tell me which devices you used to run the code: 1 Tesla V100-32GB or 2 Tesla T4?

The latest config "syn2cityscapes_t4" is for running the code on 2 Tesla T4. For a single 2080 Ti, I suggest you check out commit 2eadc081c810b6780fd046d17401083a816e64f5 and run from there. We will check and rerun it on 1 Tesla T4 later. If you have any questions, please let us know.

We test without data augmentation; see:

val = DATASET[cfg.DATASET.VAL.TYPE](val_anns, val_image_dir)
def __init__(self, anns, image_dir, scale=1, resize_size=None, center_crop=1, use_aug=False, num_classes=19, pseudo_dir=None):

Your observation is similar to ours: running the code on 2 Tesla T4 with the latest config "syn2cityscapes_t4", the best mIoU (16 classes) is 0.3866 for source-only and 0.3884 for warmup.

I have tried to adapt the code to the DeepLabv3+ model, and I think the config file for SYNTHIA-to-Cityscapes needs to be modified. I compared the code of IAST and AdaptSegNet, ran adversarial training from the beginning of training, and set the discriminator weight to 0.01. In the end I got 34 mIoU over 19 classes. The attached file is the training log. I think the SYNTHIA dataset needs more parameter tuning.
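For concreteness, the generator-side objective in an AdaptSegNet-style setup is the source segmentation loss plus a small adversarial term (weight 0.01 here) that pushes the discriminator's output on target predictions toward the "source" label. A minimal sketch, with my own function name and shapes (not the actual IAST code):

```python
import torch
import torch.nn.functional as F

def generator_loss(seg_logits, seg_labels, d_out_target, lambda_adv=0.01):
    """Source segmentation loss plus weighted adversarial loss.

    d_out_target: discriminator logits on the target-domain prediction.
    The adversarial term trains the segmenter to fool the discriminator
    into labelling target outputs as source (label 1)."""
    loss_seg = F.cross_entropy(seg_logits, seg_labels, ignore_index=255)
    loss_adv = F.binary_cross_entropy_with_logits(
        d_out_target, torch.ones_like(d_out_target))
    return loss_seg + lambda_adv * loss_adv
```

With such a small weight, the adversarial gradient only gently regularizes the segmenter, which is why the discriminator weight is one of the first parameters worth tuning per dataset.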

Sorry for not replying to you in time.
Please note that there are only 16 or 13 categories in SYNTHIA-to-Cityscapes, while the mIoU in your log is the mean over 19 categories, so you should recalculate it. Also, the reported SYNTHIA-to-Cityscapes results were obtained without careful parameter tuning, so they may not be the best achievable.
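The recalculation is just averaging the per-class IoUs over the classes that actually exist in the benchmark. A small sketch, assuming per-class IoUs indexed by Cityscapes train IDs 0..18; the 16-class SYNTHIA setting drops "terrain" (9), "truck" (14), and "train" (16):

```python
def mean_iou(per_class_iou, valid_classes):
    """Average IoU over only the benchmark's valid classes,
    not all 19 Cityscapes classes."""
    vals = [per_class_iou[c] for c in valid_classes]
    return sum(vals) / len(vals)

# Cityscapes train IDs absent from SYNTHIA: terrain, truck, train.
SYNTHIA_16 = [c for c in range(19) if c not in (9, 14, 16)]
```

Averaging over 19 classes instead silently counts three always-zero classes, which deflates the reported mIoU.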

Sorry to trouble you again, and thank you for your excellent work; I have adapted IAST to another model. But I still have one problem. When I train the model adversarially I get a similar mIoU to yours. However, when I download the warmup model you provide and test it, I find it performs more evenly across classes than my adversarially trained model: hard classes like "train" perform much better than in my result. My "train" IoU is below 10. Do you use any tricks during warmup?

Actually, we cannot explain why the performance on "train" is so good during adversarial training. We apply HorizontalFlip and RandomSizedCrop augmentation in all experiments, which differs from the augmentation used in other papers; we suspect this affects the final results. This may be helpful to you; if you find something new, please let us know.
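For segmentation, the key detail of such augmentation is that the flip and crop must be applied identically to the image and its label mask, with nearest-neighbour resampling for the mask. A self-contained NumPy sketch of the idea (my own implementation for illustration, not the transforms actually used in the IAST codebase):

```python
import numpy as np

def hflip_random_sized_crop(image, mask, crop_scale=(0.5, 1.0),
                            out_size=256, rng=None):
    """Random horizontal flip + random-sized crop, applied jointly to an
    HxWxC image and its HxW label mask, then resized to out_size."""
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:  # flip image and mask together
        image, mask = image[:, ::-1], mask[:, ::-1]
    h, w = mask.shape
    s = rng.uniform(*crop_scale)                 # crop scale factor
    ch, cw = int(h * s), int(w * s)
    y = rng.integers(0, h - ch + 1)
    x = rng.integers(0, w - cw + 1)
    image = image[y:y + ch, x:x + cw]
    mask = mask[y:y + ch, x:x + cw]
    # Nearest-neighbour resize keeps label IDs valid for the mask.
    ys = np.arange(out_size) * ch // out_size
    xs = np.arange(out_size) * cw // out_size
    return image[ys][:, xs], mask[ys][:, xs]
```

Because a random-sized crop changes the effective scale of objects, it plausibly helps rare large classes such as "train", which may explain part of the gap the questioner observed.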