microsoft / i-Code

Finetuning on Due-Benchmark

swathikirans opened this issue · comments

Hi,

I have been trying to finetune the model on due-benchmark using the provided script. However, the performance is quite low compared to the reported numbers. For example, DocVQA results in an ANLS score of 75 instead of the reported 84. I have two main queries.

  1. The provided checkpoint is missing one parameter: special_vis_token. For now this parameter is initialized randomly (see the loading sketch after this list). I am not sure whether this has a significant impact on the final score.
  2. As per the paper, the input is prepended with a task specific prompt. However, it seems this is not done for the due-benchmark tasks. Could this be the reason for the low performance?
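For reference, this is roughly how the checkpoint can be loaded while tolerating the missing key; the model object and checkpoint path are whatever the finetuning script constructs, and the missing special_vis_token simply keeps the random initialization it got in the model constructor:

```python
import torch

def load_pretrained_ignoring_missing(model: torch.nn.Module, ckpt_path: str) -> torch.nn.Module:
    """Load a released checkpoint, tolerating keys the checkpoint lacks.

    Any parameter absent from the checkpoint (e.g. 'special_vis_token' here)
    keeps the random initialization from the model constructor.
    """
    state_dict = torch.load(ckpt_path, map_location="cpu")
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    print("missing keys:", missing)        # expected to contain 'special_vis_token'
    print("unexpected keys:", unexpected)
    return model
```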

I think the main thing to focus on is the prompt. Finetuning with different prompts affects the performance. Properly adding the 2D and 1D position embeddings is also important. Anything missing could result in a performance drop.

Thank you for the quick reply. So is it not possible to reproduce the results reported in the paper by running the published code without any changes? What is the exact prompt used for DocVQA? The prompt used in the RVL-CDIP code is different from what is mentioned in the paper, so I am not sure whether the prompt used for training DocVQA is the same as in the paper either. It would be really helpful if you could provide all the details required to obtain the results reported in the paper.

The prompt should be the same as in the paper with "question answering on DocVQA. [question]. [context]".
I am mostly curious about the position embedding/bias addition to the model, which matters a lot if it is not set up properly. Could you provide some more information? How many epochs did you run? If it still doesn't work, let me try to push the DocVQA finetuning code.
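For concreteness, here is a rough sketch of how that prompt might be tokenized into prompt_token_ids before being prepended; the tokenizer variant, the helper name, and the assumption that the [context] part comes from the already-tokenized document tokens are mine, not from the repository:

```python
from transformers import T5TokenizerFast

# Assumption: a T5 tokenizer matching the due-benchmark preprocessing is used;
# "t5-large" is a guess at the exact variant (see the discussion further down).
tokenizer = T5TokenizerFast.from_pretrained("t5-large")

def build_prompt_token_ids(question: str) -> list[int]:
    # Prompt format quoted above; only the task prefix and question are
    # tokenized here, on the assumption that [context] comes from the
    # pre-tokenized document tokens stored in the memmaps.
    prompt = f"question answering on DocVQA. {question}."
    return tokenizer(prompt, add_special_tokens=False)["input_ids"]

prompt_token_ids = build_prompt_token_ids("What is the invoice number?")
```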

I used the same prompt as above. The modifications I made are as follows (see the sketch after the list):

  1. prepend the input_ids (item_dict["input_ids"]) with prompt_token_ids
  2. prepend the attention mask (item_dict["attention_mask"]) with N True values where N is the length of the prompt_token_ids
  3. prepend the bounding boxes (item_dict["seg_data"]["tokens"]["bboxes"]) with an Nx4 array of zero values where N is the length of the prompt_token_ids
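A minimal sketch of those three changes, assuming the item_dict layout described above (whether the underlying arrays are NumPy arrays or tensors in the actual dataset code is an assumption):

```python
import numpy as np

def prepend_prompt(item_dict: dict, prompt_token_ids: list[int]) -> dict:
    """Prepend the task prompt to one preprocessed due-benchmark example.

    Mirrors the three modifications listed above; the item_dict keys are
    taken from the description in this thread.
    """
    n = len(prompt_token_ids)

    # 1. prompt tokens in front of the document tokens
    item_dict["input_ids"] = np.concatenate(
        [np.asarray(prompt_token_ids), np.asarray(item_dict["input_ids"])]
    )

    # 2. the prompt tokens are all attended to (N True values)
    item_dict["attention_mask"] = np.concatenate(
        [np.ones(n, dtype=bool), np.asarray(item_dict["attention_mask"], dtype=bool)]
    )

    # 3. the prompt has no layout, so give it an Nx4 block of zero boxes
    bboxes = np.asarray(item_dict["seg_data"]["tokens"]["bboxes"])
    item_dict["seg_data"]["tokens"]["bboxes"] = np.concatenate(
        [np.zeros((n, 4), dtype=bboxes.dtype), bboxes]
    )
    return item_dict
```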

I used this script for finetuning. The training always stops around 4 epochs due to the early stopping criteria.

I was using the Unimodal 224 model. However, according to the paper, the performance of the various models varies by at most about ±2 points. Anyway, I will try the other models as well. Thanks for the input.

Hi, I tried the other two variants (512 and Dual) as well. These models also did not result in any significant improvement. So far the best score obtained on the DocVQA task in due-benchmark is 76.29, with the 512-resolution model.

Could you please provide the following details?

  1. Which model is used for preprocessing the data (generating memmaps)? Is it the t5-large provided by due-benchmark or the UDOP pretrained model?
  2. Which transformers version is used to train the model?
  1. T5-base is used for preprocessing the data. The t5-large is the one from Hugging Face transformers.
  2. I've tested with 4.20 and 4.30.

Btw, which checkpoint did you use for evaluation, the one with the lowest validation loss or the last checkpoint? I am asking because the loss is usually not a good indicator of the language score, and we usually use the last checkpoint.

  1. I used the T5-Large provided by due-benchmark for preprocessing the data.
  2. The recommended transformers version 4.30.0 was giving a "loss does not have a grad function" error, so I had to replace the AdamW optimizer from transformers with the PyTorch one (see the sketch below). I also tried 4.20 with AdamW from transformers; however, there was no change in the performance.
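For anyone hitting the same error, the replacement amounts to something like the following; the learning rate and weight decay here are placeholders, not the values used in the paper:

```python
import torch

def build_optimizer(model: torch.nn.Module) -> torch.optim.Optimizer:
    # Replaces `from transformers import AdamW` (deprecated/removed in newer
    # transformers releases) with PyTorch's built-in implementation.
    # lr and weight_decay are placeholders, not the paper's values.
    return torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=1e-2)
```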

I used the last checkpoint (last.ckpt) to get the test predictions. I am not sure what exactly is going wrong.
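Assuming the finetuning script is built on PyTorch Lightning (the last.ckpt name suggests so), saving both the last and the best-validation checkpoint makes it easy to compare the two evaluation choices discussed above; a sketch, with a placeholder metric name:

```python
from pytorch_lightning.callbacks import ModelCheckpoint

# save_last=True writes last.ckpt every epoch (the checkpoint used above),
# while monitor/mode keep a separate best-validation checkpoint for comparison.
checkpoint_callback = ModelCheckpoint(
    dirpath="checkpoints/",
    save_last=True,
    monitor="val_loss",   # metric name is a placeholder
    mode="min",
    save_top_k=1,
)
```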

What are the resource requirements for finetuning on the DocVQA task?