zhenwang9102 / Prompt-Tuning

Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning"

Prompt Tuning

This is a PyTorch implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning" (Lester et al., 2021).

Currently, we support the following Hugging Face models:

  • GPT2LMHeadModel
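
Prompt tuning freezes all pretrained weights and learns only a small matrix of soft prompt embeddings that is prepended to the input embeddings. The sketch below illustrates the idea; the SoftPrompt class and its argument names are illustrative only, not this repo's API (see model.py for the actual GPT2PromptTuningLM).

import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    # Illustrative sketch only -- not this repo's API.
    # Holds n_tokens learnable embeddings that are prepended to the input.
    def __init__(self, n_tokens, embed_dim, wte,
                 initialize_from_vocab=True, random_range=0.5):
        super().__init__()
        if initialize_from_vocab:
            # Copy the first n_tokens rows of the vocabulary embedding table.
            init = wte.weight[:n_tokens].clone().detach()
        else:
            # Uniform random initialization in [-random_range, random_range].
            init = torch.empty(n_tokens, embed_dim).uniform_(-random_range, random_range)
        self.embedding = nn.Parameter(init)

    def forward(self, input_embeds):
        # Prepend the same learned prompt to every sequence in the batch.
        prompt = self.embedding.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)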

Usage

See example.ipynb for more details.

from model import GPT2PromptTuningLM

# number of prompt tokens
n_prompt_tokens = 20
# If True, the soft prompt is initialized from the model's vocabulary embeddings;
# otherwise, set `random_range` to initialize it uniformly at random.
init_from_vocab = True
# random_range = 0.5

# Initialize GPT2LM with soft prompt
model = GPT2PromptTuningLM.from_pretrained(
    "gpt2",
    n_tokens=n_prompt_tokens,
    initialize_from_vocab=init_from_vocab
)
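
During training, only the soft prompt parameters should receive gradients; the GPT-2 weights stay frozen. Below is a minimal sketch of one training step, assuming the prompt parameters contain "soft_prompt" in their names and that the wrapper keeps GPT2LMHeadModel's forward signature (both are assumptions; check model.py and example.ipynb for the actual details).

from torch.optim import AdamW
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# Freeze every pretrained weight; leave only the soft prompt trainable.
# "soft_prompt" is an assumed parameter name -- check model.py for the real one.
for name, param in model.named_parameters():
    param.requires_grad = "soft_prompt" in name

optimizer = AdamW(
    (p for p in model.parameters() if p.requires_grad),
    lr=3e-2,  # prompt tuning typically uses a much larger LR than full fine-tuning
)

# One illustrative step on a toy batch (language-modeling loss on the inputs).
batch = tokenizer("Prompt tuning trains only a few parameters.", return_tensors="pt")
outputs = model(input_ids=batch["input_ids"], labels=batch["input_ids"])
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()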

Reference

Lester, B., Al-Rfou, R., & Constant, N. (2021). The Power of Scale for Parameter-Efficient Prompt Tuning. In Proceedings of EMNLP 2021. arXiv:2104.08691.

License: MIT License

