qcwthu / Continual_Fewshot_Relation_Learning


Question about experiment for BERT

dabao12 opened this issue · comments

Thanks for your excellent work on incremental few-shot relation learning. It is really interesting and insightful.
I tried to run the experiment with BERT, but the accuracy is far from the results reported in the paper.
Would you be willing to explain how to run CFRE based on BERT? Could I get this part of the code?
My email address is hanbj890@nenu.edu.cn. Hope to hear from you!

Thank you very much for your reply. While debugging your code, I ran into a question: I found that only the 12th encoding layer of BERT is fine-tuned, and bert.pooler is not, but the paper says "we only fine-tune the 12th encoding layer and the extra linear layer."

Hi, sorry for the late reply. I just checked the code.
`unfreeze_layers = ['layer.11', 'bert.pooler.', 'out.']`
I think the final linear layer is included.
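For reference, a minimal sketch of how such an unfreeze list is typically applied: a parameter stays trainable if its name contains any of the patterns. The parameter names below are illustrative stand-ins for a Hugging Face BERT checkpoint, not taken from the repository's code.

```python
# Unfreeze list quoted in the reply above.
unfreeze_layers = ['layer.11', 'bert.pooler.', 'out.']

def should_unfreeze(param_name, patterns=unfreeze_layers):
    """A parameter is left trainable iff its name contains any pattern."""
    return any(p in param_name for p in patterns)

# Example parameter names in the style of a BERT state dict (illustrative):
example_params = [
    'bert.encoder.layer.0.attention.self.query.weight',   # frozen
    'bert.encoder.layer.11.output.dense.weight',          # matches 'layer.11'
    'bert.pooler.dense.weight',                           # matches 'bert.pooler.'
    'out.weight',                                         # matches 'out.'
]

for name in example_params:
    print(name, should_unfreeze(name))
```

In training code this would typically be applied with something like `param.requires_grad = should_unfreeze(name)` inside a loop over `model.named_parameters()`, so the pooler and the final linear layer (`out.`) are indeed updated along with the 12th encoder layer.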