[NAACL'22] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning
Home Page: https://arxiv.org/abs/2111.04198
SamMohel opened this issue 2 years ago · comments
Is there a smaller version of your model? I tried cambridgeltl/tacl-bert-base-uncased, but it is too large to run on my laptop.
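For context, TaCL uses the standard BERT-base architecture, so the released checkpoint carries roughly 110M parameters (about 440 MB in fp32). A back-of-envelope sketch of where that size comes from, assuming the standard bert-base-uncased configuration (12 layers, hidden size 768, vocabulary 30522):

```python
# Back-of-envelope parameter count for a BERT-base checkpoint
# (a sketch; assumes the standard bert-base-uncased config,
# which the TaCL base checkpoint shares).
vocab, max_pos, types, hidden, inter, layers = 30522, 512, 2, 768, 3072, 12

# Embeddings: word + position + token-type tables, plus LayerNorm (weight + bias).
embeddings = (vocab + max_pos + types) * hidden + 2 * hidden

# One transformer layer: Q/K/V/output projections, FFN up/down, two LayerNorms.
attention = 4 * (hidden * hidden + hidden)
ffn = (hidden * inter + inter) + (inter * hidden + hidden)
layer = attention + ffn + 2 * (2 * hidden)

# Pooler: one dense layer applied to the [CLS] token.
pooler = hidden * hidden + hidden

total = embeddings + layers * layer + pooler
print(f"{total:,} parameters, ~{total * 4 / 1e6:.0f} MB in fp32")
```

If memory is the constraint, loading the checkpoint in half precision (e.g., passing `torch_dtype=torch.float16` to `from_pretrained` in Hugging Face transformers) roughly halves the footprint without changing the architecture.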