h-guo18 / fewshotgen


Few-shot generation

This repository is for few-shot text generation research.

Task Description

Given a handful of training samples, we explore how to maximize adaptation performance on a causal language modeling (CausalLM) task. Specifically, we use GPT-2 as the backbone model and predict the next 40 tokens given the previous 200 tokens (a minimal sketch of this setup follows the list below). We also take the following practical requirements into consideration:

  1. Parameter efficiency
  2. Generalization to any new domain
  3. Stability across different shot numbers (shotnums)
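
The sketch below illustrates the underlying prediction task with the Hugging Face transformers API: generate 40 new tokens from a 200-token context. It is an illustration only, not this repository's training code, and the stock "gpt2" checkpoint is an assumption (the domains here suggest the actual backbone may be a different checkpoint).

```python
# Illustrative sketch of the task setup (not this repository's code).
# Assumption: the stock "gpt2" checkpoint stands in for the actual backbone.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

text = "..."  # a passage from the target domain (placeholder)
inputs = tokenizer(text, return_tensors="pt",
                   truncation=True, max_length=200)   # 200-token context
outputs = model.generate(**inputs,
                         max_new_tokens=40,            # predict the next 40 tokens
                         do_sample=False)
continuation = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:])
print(continuation)
```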

Usage

prepare data

Unzip data.zip into the repository's main directory.
We provide corpora from 5 different domains: gongwen (official documents), international news, poetry, sports news, and short stories.

environment

pip install -r requirements.txt

train model

python train.py --shotnum $shotnum --domain $domain --adaption_type $adaption_type
The model will be saved to the save/ directory by default.
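For example, python train.py --shotnum 16 --domain sports --adaption_type lora would adapt the model to the sports-news domain with 16 training examples using LoRA (example argument values; see the argument list below).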

test model

python test.py --shotnum $shotnum --domain $domain --adaption_type $adaption_type
The prediction file will be saved to the pred/ directory by default.

command arguments

$shotnum: number of training examples (shots); possible values: {0, 4, 8, 16, 32, 64, 128};
$domain: target domain of adaptation; one of {'gongwen', 'international', 'peotry', 'sports', 'story'};
$adaption_type: one of 'finetune', 'adapter', 'lora', or 'retrieval'; indicates the method of adaptation to the target domain:

  • 'finetune': traditional full-parameter adaptation;
  • 'adapter': parameter-efficient tuning by inserting small parameter blocks (adapters), paper: https://arxiv.org/pdf/1902.00751.pdf;
  • 'lora': parameter-efficient tuning by adding low-rank matrices, paper: https://arxiv.org/pdf/2106.09685.pdf (see the sketch after this list);
  • 'retrieval': feeds encodings of retrieved passages as references. Training with this setting adds cross-attention blocks and freezes all other parameters. The result should be a domain-agnostic LM with the ability to consult the given passages.
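
To make the 'lora' option concrete, here is a minimal, self-contained sketch of a low-rank update wrapped around a frozen linear layer. It illustrates the general technique from the LoRA paper under assumed hyperparameters (r, alpha); it is not this repository's implementation.

```python
# Illustrative LoRA-style layer (not this repository's code).
# The frozen weight W is augmented with a trainable low-rank update B @ A,
# so only r * (in_features + out_features) parameters are trained per layer.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)        # freeze the pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x):
        # frozen path + trainable low-rank path
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling
```

Wrapping, for example, GPT-2's attention projection layers with such modules and training only A and B is the parameter-efficient behavior the 'lora' setting refers to.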

Results

BLEU

(BLEU results figure)

BERTScore

(BERTScore results figure)

Rouge-2

(Rouge-2 results figure)
