clovaai / donut

Official Implementation of OCR-free Document Understanding Transformer (Donut) and Synthetic Document Generator (SynthDoG), ECCV 2022

Home Page: https://arxiv.org/abs/2111.15664

Question about the special token map

RAY-RaY-R opened this issue

Hello, I just have a quick question about the "special_tokens_map.json" file. After I fine-tuned the rvlcdip task (classification) on my own dataset, the additional_special_tokens key only shows one value:
{"additional_special_tokens": ["<s_rvlcdip>"]}

When I check the same file in rvlcdip-pretrained-official, it has all the custom class names as well:
{"additional_special_tokens": ["</s_class>", "<advertisement/>", "<budget/>", "<email/>", "<file_folder/>", "<form/>", "<handwritten/>", "<invoice/>", "<letter/>", "<memo/>", "<news_article/>", "<presentation/>", "<questionnaire/>", "<resume/>", "<s_class>", "<s_iitcdip>", "<s_rvlcdip>", "<s_synthdog>", "<scientific_publication/>", "<scientific_report/>", "<specification/>"]}

The model shows great accuracy, but I'm just a bit concerned. Is this a problem? If I add those tokens manually, the accuracy of the model drops a lot.
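
For reference, a minimal sketch (the checkpoint path is a placeholder) of how to check which additional special tokens a saved checkpoint has actually registered, assuming it was saved in the Hugging Face format:

```python
# Sketch: inspect the additional special tokens of a saved Donut checkpoint.
# "path/to/finetuned-rvlcdip-checkpoint" is a placeholder, not a real path.
from transformers import AutoTokenizer

# For Donut checkpoints this resolves to an XLM-RoBERTa tokenizer.
tokenizer = AutoTokenizer.from_pretrained("path/to/finetuned-rvlcdip-checkpoint")

print(tokenizer.additional_special_tokens)
# e.g. ['<s_rvlcdip>'] -- should match what special_tokens_map.json reports
```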

Hey, I did not develop the rvlcdip model, but from working with Donut for a bit, my understanding is that the authors add the classes as their own tokens so the model learns them from scratch and assigns each one a unique ID, instead of piecing them together from wordpiece tokens.

In a bit more detail: when generating an output, the decoder uses the vocabulary to put one token after the other, like any other generative transformer. So let's say the model should predict "scientific_publication" as a class; it needs to look up vocabulary tokens like ["<", "scientific", "_", "publication", "/>"] and piece them together. Each of these tokens has its own ID and is part of the predicted output sequence. This works fine, but the model needs to "forget and relearn" what these tokens mean during fine-tuning, and it also needs to predict a different number of tokens for each class. So what you can do is add a new token ID for "<scientific_publication/>". Then the model can learn what this new token means and only has to predict a single token ID to make a classification. You can do this by calling add_tokens on the tokenizer and resize_token_embeddings on the decoder before training, but for more information you should check out the example notebooks from NielsRogge.
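
Roughly, that step looks like this. This is only a sketch based on the public Hugging Face Donut checkpoints, not the official training script; the model name and class tokens below are just examples:

```python
# Sketch: register class names as single tokens before fine-tuning.
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base")
tokenizer = processor.tokenizer

# Without a dedicated token, a class name is split into several sub-word pieces
# (the exact split depends on the vocabulary):
print(tokenizer.tokenize("<scientific_publication/>"))

# Register the class names as new single tokens ...
new_tokens = ["<scientific_publication/>", "<invoice/>", "<letter/>"]
tokenizer.add_tokens(new_tokens, special_tokens=True)

# ... and grow the decoder's embedding matrix so the new IDs have embeddings to learn.
model.decoder.resize_token_embeddings(len(tokenizer))

# Now each class is a single ID, so the decoder emits it in one step:
print(tokenizer.convert_tokens_to_ids("<scientific_publication/>"))
```

Note that the newly added rows of the embedding matrix start out untrained, which is why the model has to see enough fine-tuning examples before these class tokens become useful.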

Hope this helps and good luck with your experiments!