funnyzhou / REFERS

Questions about Results of Fine-tuning

beomheepark opened this issue · comments

I apologize for the inconvenience, but I have some questions about reproducing the fine-tuning results of your REFERS paper.

When I ran the command lines with the provided pre-trained weight file, I obtained results different from Table 1 in the paper.
For example, on Shenzhen Tuberculosis I got 0.93 on the test set, whereas the paper reports 0.98 (a 0.05 gap).
Could you tell me where the difference comes from?

I simply executed the following command as written in README.md:

    python train.py --name caption_100 --stage train --model_type ViT-B_16 --num_classes 1 --pretrained_dir "../checkpoint/refers_checkpoint.pth" --output_dir "./output/" --data_volume '100' --num_steps 100 --eval_batch_size 512 --img_size 224 --learning_rate 3e-2 --warmup_steps 5 --fp16 --fp16_opt_level O2 --train_batch_size 128

There were also performance gaps on all the other fine-tuning datasets.
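For reference, this is a minimal sketch of how one could recompute the test AUROC from saved probabilities and labels to rule out an evaluation-side difference. The `.npy` file names below are placeholders from my own checking script, not files produced by this repo:

```python
# Recompute test-set AUROC from dumped model outputs (placeholder file names).
import numpy as np
from sklearn.metrics import roc_auc_score

probs = np.load("test_probs.npy")    # predicted probabilities, shape (N,) or (N, num_classes)
labels = np.load("test_labels.npy")  # binary ground-truth labels, same shape

print("AUROC:", roc_auc_score(labels, probs))
```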

Hi, I also see a gap after fine-tuning. My gap differs slightly from yours (0.05): in some cases it is bigger, in some cases smaller. Did you find that performance was lower on all test sets?

Moreover, I am trying to reproduce the pre-training stage, but I could not find the recurrent aggregation function in the ViT backbone or elsewhere in the source code. Do you have the same issue?
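To make the question concrete, here is a purely illustrative sketch of what I would expect an attention-based aggregation over per-view ViT features to look like. This is not the authors' implementation; the module and variable names are made up:

```python
# Illustrative attention-weighted pooling over per-view features (NOT the REFERS code).
import torch
import torch.nn as nn

class ViewAggregator(nn.Module):
    def __init__(self, dim: int = 768):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # one attention logit per view

    def forward(self, view_feats: torch.Tensor) -> torch.Tensor:
        # view_feats: (batch, num_views, dim), e.g. the [CLS] token of each view
        weights = torch.softmax(self.score(view_feats), dim=1)  # (batch, num_views, 1)
        return (weights * view_feats).sum(dim=1)                # fused feature: (batch, dim)

if __name__ == "__main__":
    agg = ViewAggregator(dim=768)
    feats = torch.randn(2, 2, 768)  # 2 studies, 2 views each
    print(agg(feats).shape)         # torch.Size([2, 768])
```

I could not find anything resembling such a module in the released source; a pointer to the relevant file would be appreciated.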