lucidrains / distilled-retriever-pytorch

Distilling Knowledge from Reader to Retriever

Implementation of the retriever distillation procedure as outlined in the paper "Distilling Knowledge from Reader to Retriever", in Pytorch. They propose to train the retriever using the reader's cross-attention scores as pseudo-labels, reaching state-of-the-art results on question answering.
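
As a rough illustration of that objective, here is a minimal sketch of the distillation loss, assuming the retriever scores each retrieved passage with a dot product of dense embeddings and the reader exposes an aggregated cross-attention score per passage. All names below are illustrative, not the actual API of this repository.

```python
import torch
import torch.nn.functional as F

def retriever_distillation_loss(retriever_scores, reader_attn_scores):
    # retriever_scores:   (batch, num_passages) similarity scores from the retriever
    # reader_attn_scores: (batch, num_passages) aggregated cross-attention mass the
    #                     reader assigned to each passage, used as pseudo-labels
    target = F.softmax(reader_attn_scores.detach(), dim = -1)  # no gradient through the reader
    log_pred = F.log_softmax(retriever_scores, dim = -1)
    # KL divergence between the reader's attention distribution and the
    # retriever's predicted distribution over the retrieved passages
    return F.kl_div(log_pred, target, reduction = 'batchmean')

# toy usage with random scores, batch of 2 questions with 100 passages each
loss = retriever_distillation_loss(
    torch.randn(2, 100, requires_grad = True),  # would come from the retriever
    torch.rand(2, 100)                          # would come from the reader's cross-attention
)
loss.backward()
```

The KL divergence shown here is one of the training objectives considered in the paper.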

Update: the BM25 gains do not actually look as impressive as the BERT gains. Also, it seems that distilling with BERT as the starting point never reaches the same level as BM25.

I am wondering whether it would make more sense to modify Marge (https://github.com/lucidrains/marge-pytorch) so that, during training, one minimizes a loss between an extra prediction head on top of the retriever and the cross-attention scores.
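
To make that idea concrete, below is a rough sketch of such an auxiliary objective, assuming the retriever already exposes per-document embeddings. The prediction head and the MSE loss are hypothetical, purely to illustrate regressing a retriever-side prediction onto the decoder's cross-attention scores.

```python
from torch import nn
import torch.nn.functional as F

class AttnPredictionHead(nn.Module):
    # hypothetical extra head on top of the retriever's document embeddings
    def __init__(self, dim):
        super().__init__()
        self.to_attn = nn.Linear(dim, 1)

    def forward(self, doc_embeds, cross_attn_scores):
        # doc_embeds:        (batch, num_docs, dim) document embeddings from the retriever
        # cross_attn_scores: (batch, num_docs)      cross-attention scores from the decoder
        pred = self.to_attn(doc_embeds).squeeze(-1)
        # auxiliary loss tying the head's prediction to the (detached) cross-attention scores
        return F.mse_loss(pred, cross_attn_scores.detach())
```

This auxiliary loss would presumably be added to Marge's usual reconstruction loss during training.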

Citations

@misc{izacard2020distilling,
    title={Distilling Knowledge from Reader to Retriever for Question Answering}, 
    author={Gautier Izacard and Edouard Grave},
    year={2020},
    eprint={2012.04584},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

License: MIT License