
IFG-Pretrained-LM

Pre-trained Language Models as Prior Knowledge for Playing Text-based Games (arXiv:2107.08408)

Set up:

conda create -n {yourenvname} python=3.7 anaconda
pip install torch==1.4 transformers==2.5.1 jericho fasttext wandb importlib_metadata
python -m spacy download en_core_web_sm
git clone https://github.com/Exploration-Lab/IFG-Pretrained-LM.git
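
To confirm the Jericho install works, a minimal sanity check like the one below can be run. It assumes a recent Jericho release (FrotzEnv returning (obs, info) from reset) and a game file such as ../games/zork1.z5; the specific filename is illustrative, not something shipped by this repo.

```python
# Quick Jericho sanity check: load a game, look around, take one action.
from jericho import FrotzEnv

env = FrotzEnv("../games/zork1.z5")       # path to a Z-machine game file (assumed)
obs, info = env.reset()                    # initial observation and info dict
print(obs)                                 # opening text of the game
print(env.get_max_score())                 # maximum achievable score

obs, reward, done, info = env.step("open mailbox")  # send a text action
print(obs, reward, done)

print(env.get_valid_actions())             # Jericho's valid-action handicap
env.close()
```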

Get the trained DistilBERT here, or run LM training from the dbert_train folder.
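
To check that a downloaded (or freshly trained) checkpoint loads under the pinned transformers==2.5.1 API, a sketch along these lines can help. The directory name dbert_checkpoint/ is an assumption; point it at wherever the model was saved. It loads only the base DistilBERT encoder, which may differ from the exact head used by the training code.

```python
# Minimal checkpoint-loading sketch for the pinned transformers 2.5.1 / torch 1.4 stack.
import torch
from transformers import DistilBertTokenizer, DistilBertModel

lm_path = "dbert_checkpoint/"              # assumed path to the trained model
tokenizer = DistilBertTokenizer.from_pretrained(lm_path)
model = DistilBertModel.from_pretrained(lm_path)
model.eval()

obs = "You are standing in an open field west of a white house."
input_ids = torch.tensor([tokenizer.encode(obs)])
with torch.no_grad():
    hidden_states = model(input_ids)[0]    # transformers 2.x returns a tuple
print(hidden_states.shape)                 # (1, seq_len, 768)
```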

Run RL training:

conda activate {yourenvname}
cd IFG-Pretrained-LM/dbert_drrn
python train.py --rom_path ../games/{gamefilename} --lm_path {lm_path} --output_dir ./logs
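
For intuition about what the DRRN-style agent in dbert_drrn is doing, here is a hedged, generic sketch of DRRN scoring: the state and each candidate action are encoded separately and Q(s, a) is their inner product, with the highest-scoring action chosen. The GRU encoders and dimensions below are illustrative defaults, not the exact architecture in this repo (which builds on DistilBERT representations).

```python
# Generic DRRN-style Q-network sketch (not the repo's exact implementation).
import torch
import torch.nn as nn

class DRRN(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.state_enc = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.act_enc = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def q_values(self, state_ids, action_ids):
        # state_ids: (1, state_len); action_ids: (num_actions, act_len)
        _, s = self.state_enc(self.embed(state_ids))     # (1, 1, hidden)
        _, a = self.act_enc(self.embed(action_ids))      # (1, num_actions, hidden)
        return (a.squeeze(0) * s.squeeze(0)).sum(dim=1)  # (num_actions,)

# Example: score three candidate actions for one state and act greedily.
net = DRRN(vocab_size=1000)
state = torch.randint(0, 1000, (1, 20))
actions = torch.randint(0, 1000, (3, 4))
q = net.q_values(state, actions)
print(q, q.argmax().item())
```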

Get more game files here.

Acknowledgements

The code borrows from CALM and Hugging Face.

Citation

If you use our work in your research, please cite:

@misc{singh2021pretrained,
      title={Pre-trained Language Models as Prior Knowledge for Playing Text-based Games}, 
      author={Ishika Singh and Gargi Singh and Ashutosh Modi},
      year={2021},
      eprint={2107.08408},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
