zjunlp / knowledge-rumination

[EMNLP 2023] Knowledge Rumination for Pre-trained Language Models


Knowledge-Rumination

Create a conda environment:

conda create -n rumination python=3.8
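Activate it before installing dependencies:

conda activate rumination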

First, install the bundled transformers (v4.3.0) in editable mode, then the remaining requirements:

cd transformers-4.3.0
pip install --editable .
cd ..
pip install -r requirements.txt
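To verify that the editable install is the one being picked up, you can print the installed version (it should report 4.3.0):

python -c "import transformers; print(transformers.__version__)"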

Initialize the model:

python initialization.py

Then run training and evaluation:

export COQA_DIR=./data/coqa
python cli.py \
--task_name coqa \
--model_type roberta \
--model_name_or_path roberta-base \
--do_train \
--do_eval \
--data_dir $COQA_DIR \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--max_seq_length 80 \
--output_dir ./outputs/models_roberta/roberta_base \
--per_gpu_eval_batch_size=16 \
--per_device_train_batch_size=16 \
--gradient_accumulation_steps 2 \
--overwrite_output
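If cli.py follows the usual Hugging Face example-script conventions (an assumption; only the command above is documented), evaluation can be re-run on the saved checkpoint by pointing --model_name_or_path at the output directory and dropping --do_train:

export COQA_DIR=./data/coqa
python cli.py \
--task_name coqa \
--model_type roberta \
--model_name_or_path ./outputs/models_roberta/roberta_base \
--do_eval \
--data_dir $COQA_DIR \
--max_seq_length 80 \
--output_dir ./outputs/models_roberta/roberta_base \
--per_gpu_eval_batch_size=16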


License: MIT

