LuoweiZhou / VLP

Vision-Language Pre-training for Image Captioning and Question Answering


Invalid UniLM checkpoints

lostnighter opened this issue · comments

Hi Luowei,
thanks for releasing the code to the public.
I find that the link 'UniLM checkpoints' is invalid now; could you release an accessible link again?
Your paper states that you use BERT-base as the Transformer backbone and that the weights of your BERT model are initialized from UniLM. However, in the UniLM paper and on their GitHub they only explore BERT-large, so I am not sure whether the UniLM checkpoint you use is BERT-base or BERT-large.

@lostnighter The BERT-base UniLM checkpoint has not been officially released yet, so we do not have the permission to distribute the model. Please contact the UniLM authors for more details.

As an alternative, you can initialize the VLP pre-training model with the original BERT checkpoint, which gives similar results (at least on CC pre-training), as shown in Tab. 5.
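For reference, initializing a model from a text-only checkpoint while leaving the extra (vision-side) parameters at their random initialization can be done in PyTorch with `load_state_dict(strict=False)`. The sketch below uses small hypothetical stand-in modules (`TextEncoder`, `VLPModel`, and their layer names are illustrative, not the actual VLP code):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a BERT-like text checkpoint.
class TextEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(16, 32)  # plays the role of the BERT weights
        self.layer = nn.Linear(32, 32)

# Hypothetical stand-in for a VLP-style model: the same text parameters
# plus new vision-specific parameters that BERT does not have.
class VLPModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(16, 32)        # shared with the text checkpoint
        self.layer = nn.Linear(32, 32)        # shared with the text checkpoint
        self.vision_proj = nn.Linear(64, 32)  # new, stays randomly initialized

# Pretend this state dict was loaded from the original BERT checkpoint file.
bert_ckpt = TextEncoder().state_dict()

model = VLPModel()
# strict=False copies every matching key from the checkpoint and reports
# the keys it could not match instead of raising an error.
missing, unexpected = model.load_state_dict(bert_ckpt, strict=False)

# Only the vision-specific parameters are missing from the text checkpoint.
assert sorted(missing) == ["vision_proj.bias", "vision_proj.weight"]
assert unexpected == []
# The shared text weights were actually copied over.
assert torch.equal(model.embed.weight, bert_ckpt["embed.weight"])
```

The same pattern applies when the checkpoint is a real BERT state dict: any key present in both the checkpoint and the model is copied, and the remaining parameters keep their fresh initialization for pre-training.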