In this repository, we release the code for the paper:
Matching Tasks to Objectives: Fine-Tuning and Prompt-Tuning Strategies for Encoder-Decoder Pre-trained Language Models
If you've reached this page because of interest in our paper and are looking for the project resources, please feel free to reach out; we can expedite the publication of the project materials as needed.

Contact
For any inquiries or requests related to the project, please contact us at: pouramini at gmail