dandelin / ViLT

Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision"

Flickr30k fine-tuning results do not match the provided checkpoint

JACKHAHA363 opened this issue

Hi authors,

I took the provided pretrained 200k checkpoint and fine-tuned it on Flickr30k. The resulting IR and TR scores are 64.5 and 81.7; the TR score is lower than the one reported in the paper. My fine-tuning command is:

$PYTHONBIN run.py with data_root=vilt_dataset/ \
        num_gpus=8 num_nodes=1 task_finetune_irtr_f30k \
        per_gpu_batchsize=4 load_path="weights/vilt_200k.ckpt" \
        exp_name="f30k/finetune_official" 

[Screenshot of the fine-tuning results]

I also tested the provided vilt_irtr_f30k.ckpt and the results are good, with IR=65.3 and TR=83.5. May I ask what process was used to obtain vilt_irtr_f30k.ckpt?

@JACKHAHA363

The fine-tuning results can be unstable due to augmentations. Also, we trained the IR/TR fine-tuning models only once.
You may increase the number of training epochs (more than 10 epochs, maybe 20?) to get more stable and better results; one way to do that from the command line is sketched below.
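
A minimal sketch, assuming the sacred config exposes an epoch key such as max_epoch (as in ViLT's config.py) and reusing the command from above; the exp_name here is arbitrary:

$PYTHONBIN run.py with data_root=vilt_dataset/ \
        num_gpus=8 num_nodes=1 task_finetune_irtr_f30k \
        per_gpu_batchsize=4 load_path="weights/vilt_200k.ckpt" \
        max_epoch=20 exp_name="f30k/finetune_20ep"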

I tried training for more epochs, but that ended up overfitting, with the validation loss increasing. Would you mind also providing the checkpoint at 100k steps?

Were you able to solve this issue, @JACKHAHA363? I have similar issues on both Flickr and COCO retrieval.

Hi,
I found the IR/TR evaluation results on Flickr30k are still unstable even when using the official fine-tuned checkpoint. Sometimes I get 63.94 (IR) / 83.6 (TR), and sometimes 64.3 (IR) / 83.7 (TR). What do you think? @dandelin @JACKHAHA363

Hi @byougert

Oops, you got the mail. I deleted the comment right after posting it, as I noticed I had put shuffle=False in DistributedSampler(image_dset, shuffle=False).

After a quick investigation, though, I found the true reason: it was precision=16, set in https://github.com/dandelin/ViLT/blob/master/run.py#L51.
After setting precision=32 during evaluation, I was able to get stable results.

I guess the scores from rank_output sit very close together, so they need higher precision.
Thanks for the report; I will revise EVAL.md. :)
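
For later readers, a concrete illustration: float16 has only about three significant decimal digits near 1.0, so two retrieval scores that differ in the fourth decimal place can collapse to the same representable value, making their ranking arbitrary. A minimal standalone PyTorch sketch (not from the ViLT codebase):

import torch

# Two candidate scores that differ only in the fourth decimal place.
scores = torch.tensor([0.8123, 0.8124])

# In float16 both round to the same representable value (0.8125),
# so the comparison can no longer distinguish the two candidates.
half = scores.half()
print(half)                            # tensor([0.8125, 0.8125], dtype=torch.float16)
print((half[0] == half[1]).item())     # True

# float32 preserves the ordering.
print((scores[0] < scores[1]).item())  # True

If I read EVAL.md right, the override can also be passed on the command line (e.g. appending precision=32 test_only=True to the evaluation command) instead of patching run.py.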

Hi,
Yes, I received your message in my mail but couldn't find the reply on GitHub, haha.
Thanks for your reply and the nice work.

Hi, @dandelin
I'm sorry to say the results still seem puzzling. Last night, when I changed precision to 32 during evaluation, I got two similar but NOT identical results: one was 0.6480 (IR) / 0.8370 (TR), the other 0.6460 (IR) / 0.8370 (TR).
Actually, the seed is fixed to exactly 0. I have no idea what causes the difference. Y_Y
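
A note for anyone hitting the same residual gap: fixing the seed alone does not make CUDA evaluation deterministic, since some GPU kernels are nondeterministic and multi-GPU gathers can change reduction order. A minimal sketch of the standard PyTorch determinism switches one could try (not ViLT-specific, and untested against this exact setup):

import os
import torch

# cuBLAS requires this for deterministic matmuls on CUDA >= 10.2;
# it must be set before the first CUDA call.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

torch.manual_seed(0)
torch.backends.cudnn.deterministic = True  # pick deterministic cuDNN kernels
torch.backends.cudnn.benchmark = False     # autotuning may choose different kernels per run
# Raise an error if any op falls back to a nondeterministic implementation
# (available since PyTorch 1.8).
torch.use_deterministic_algorithms(True)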