adamlin120/lm


Installation

Develop

pip install -r requirements.txt

Deploy

docker image build -t transition:latest .

Data

bash prepare_data.sh

The processed file is stored at data/<dataset_name>.processed.json

JSON format:

{
  "train": {
    "train_0": [ "Other's utterance", "Our turn", ... ],
    ...
  },
  "valid": {...},
  "test": {...}
}
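
For a quick sanity check, here is a minimal Python sketch (the <dataset_name> placeholder and split names are taken from the schema above) that loads the processed file and prints the size of each split:

import json

# Path to the processed file produced by prepare_data.sh;
# replace <dataset_name> with the dataset you prepared.
path = "data/<dataset_name>.processed.json"

with open(path) as f:
    data = json.load(f)

# Each split maps a dialogue id to a list of alternating utterances:
# the other speaker's utterance first, then our turn, and so on.
for split in ("train", "valid", "test"):
    print(split, len(data[split]), "dialogues")

dialogue_id, utterances = next(iter(data["train"].items()))
print(dialogue_id, utterances[:2])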

Training

The training script can be used in single-GPU or multi-GPU settings:

python trainer.py -h  # To see options for training
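
Before launching, it can help to confirm how many GPUs PyTorch sees: 0 falls back to CPU, 1 is the single-GPU setting, and more than 1 enables multi-GPU training. A small check, assuming PyTorch is among the installed requirements:

import torch

# Number of CUDA devices visible to this process.
n_gpus = torch.cuda.device_count()
print(f"Visible GPUs: {n_gpus}")
for i in range(n_gpus):
    print(i, torch.cuda.get_device_name(i))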

Interactive Demo

CHECKPOINT can be either a local directory containing the output of save_pretrained from transformers, or a <user/model_name> identifier on the Hugging Face Model Hub.

e.g. ytlin/verr5re0 or ytlin/1pm2c7qw_6

python demo.py <checkpoint>
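
For reference, this is roughly how a CHECKPOINT is resolved with transformers; the Auto* classes below are a generic stand-in and may not be the exact classes demo.py uses:

from transformers import AutoModelForCausalLM, AutoTokenizer

# CHECKPOINT can be a Hub id (e.g. ytlin/verr5re0) or a local directory
# written with save_pretrained; from_pretrained handles both the same way.
checkpoint = "ytlin/verr5re0"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)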

Generate Sample

python generate_response.py <checkpoint> <input dialogues path> <output path>
# e.g.
python generate_response.py ytlin/1klqb7u9_35 ./data/human_eval/chit_to_task_cue.txt ./data/human_eval/chit_to_task_cue.txt.gen
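
To eyeball the generations next to their inputs, a hypothetical helper like the one below can be used; it assumes the input dialogue file and the generated output are line-aligned, which may not match the actual file layout:

from pathlib import Path

# Assumed to be line-aligned: one dialogue context per input line,
# one generated response per output line.
contexts = Path("data/human_eval/chit_to_task_cue.txt").read_text().splitlines()
responses = Path("data/human_eval/chit_to_task_cue.txt.gen").read_text().splitlines()

for context, response in zip(contexts, responses):
    print("CONTEXT :", context)
    print("RESPONSE:", response)
    print("-" * 40)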

Docker

docker run --rm -it -e CHECKPOINT=<checkpoint> transition:latest

About

License: MIT

