princeton-nlp / LM-BFF

[ACL 2021] LM-BFF: Better Few-shot Fine-tuning of Language Models https://arxiv.org/abs/2012.15723

Can this method improve data-sufficient fine-tuning?

guotong1988 opened this issue · comments

I am new to LM-BFF.
Thank you very much!

Hi guotong, can you clarify your question? What do you mean by improving "data-sufficient" fine-tuning?

My fault. I mean the case where the amount of fine-tuning data is large enough.

I see. We haven't tried this method on the full training data, but Figure 3 shows the trends for LM-BFF and standard fine-tuning as the amount of training data increases, and the two tend to converge in the end.