sangmichaelxie/pretraining_analysis

Code for the NeurIPS 2021 paper "Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning"

Home Page: https://arxiv.org/abs/2106.09226
