Missing References in the Prompt Tuning section.
MLWatcher opened this issue · comments
MLWatcher commented
There is a large body of prior and concurrent work in prompt tuning that has been left out of this paper. For example:
- Learning How to Ask, one of the first works to learn a continuous version of a prompt https://arxiv.org/abs/2104.06599
- Prefix-Tuning https://arxiv.org/abs/2101.00190
- Prompt Tuning, which appears to be the source of the term model-tuning https://arxiv.org/abs/2104.08691
- GPT Understands, Too, which jointly learns a prompt and updates the model, like the second step of your two-step pipeline https://arxiv.org/abs/2103.10385
- WARP https://arxiv.org/abs/2101.00121
Zhengyan Zhang commented
Hi,
Thanks for your suggestions. We have added these references in the latest version of our paper.