princeton-nlp / SimCSE

[EMNLP 2021] SimCSE: Simple Contrastive Learning of Sentence Embeddings https://arxiv.org/abs/2104.08821

Question about used models

MLKoz opened this issue · comments

commented

Hello, I would like to know why you conducted experiments with BERT and RoBERTa instead of XLNet or DeBERTa. I have read that XLNet outperforms BERT/RoBERTa on many NLP tasks, and I am thinking about testing XLNet with SimCSE. Are you aware of any disadvantages? Thanks.

Hi,

Our method is model agnostic and should be able to adapt to any pre-trained model. We chose BERT/RoBERTa because they are more commonly used in the community. I also believe that RoBERTa performs better than XLNet on many tasks.
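To illustrate why the method is model agnostic: the unsupervised SimCSE objective only needs two embeddings of the same sentence (produced under independent dropout masks) from *any* encoder, and then applies an in-batch contrastive loss. Below is a minimal NumPy sketch of that loss; `h1` and `h2` stand in for the two dropout views from whatever backbone you pick (BERT, RoBERTa, XLNet, DeBERTa, etc.), and the function name and temperature default are illustrative, not the repository's actual API.

```python
import numpy as np

def simcse_loss(h1, h2, temperature=0.05):
    """Unsupervised SimCSE-style in-batch contrastive loss.

    h1, h2: (batch, dim) embeddings of the SAME sentences under two
    independent dropout masks, produced by any sentence encoder.
    Row i of h1 is a positive pair with row i of h2; all other rows
    in the batch act as negatives.
    """
    # Cosine similarity matrix between the two views, scaled by temperature
    h1 = h1 / np.linalg.norm(h1, axis=1, keepdims=True)
    h2 = h2 / np.linalg.norm(h2, axis=1, keepdims=True)
    sim = (h1 @ h2.T) / temperature          # shape: (batch, batch)

    # Cross-entropy with the diagonal (the true pair) as the target class
    logits = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Because nothing here depends on the encoder's architecture, swapping BERT for XLNet or DeBERTa only changes how `h1` and `h2` are computed, not the training objective.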
