Lechatelia / fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.


Instructions

Install environment

git checkout main   # the working branch has been moved from master to main

pip install -U omegaconf==2.0.6 hydra-core==1.0.6

For other evaluation metrics:

pip install scipy scikit-learn
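
scipy and scikit-learn supply the GLUE metrics that go beyond plain accuracy (a minimal sketch of the computations involved, not this repo's own evaluation code):

from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import f1_score, matthews_corrcoef

labels = [0, 1, 0, 0, 1]
preds = [0, 1, 1, 0, 1]
print(matthews_corrcoef(labels, preds))  # CoLA: Matthews correlation
print(f1_score(labels, preds))           # MRPC / QQP: F1

golds = [0.0, 1.0, 0.5, 0.1, 0.9]        # STS-B: regression targets
scores = [0.1, 0.9, 0.4, 0.2, 0.8]
print(pearsonr(golds, scores)[0], spearmanr(golds, scores)[0])  # STS-B: Pearson / Spearman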

Prepare data

Please refer to the GLUE data preparation instructions.

Finetune

Training

sh scripts/slurm_glue_finetune.sh roberta_small \
    workdirs/slurm_roberta_small_bookswiki_train_8gpu_100k/bert_small_100k/checkpoints/checkpoint_last.pt \
    fintune_small_100k ALL srun

Evaluation

sh scripts/evaluate_glue.sh two workdirs/slurm_glue_finetune/roberta_small_fintune_small_100k srun
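
To sanity-check a finetuned checkpoint outside the slurm scripts, it can be loaded with fairseq's RobertaModel API (a minimal sketch; checkpoint_best.pt and RTE-bin are assumptions following the upstream GLUE tutorial, not paths defined by this repo):

from fairseq.models.roberta import RobertaModel

# point these at your finetuned checkpoint directory and binarized GLUE task data
roberta = RobertaModel.from_pretrained(
    'workdirs/slurm_glue_finetune/roberta_small_fintune_small_100k',
    checkpoint_file='checkpoint_best.pt',
    data_name_or_path='RTE-bin',
)
roberta.eval()  # disable dropout for evaluation

tokens = roberta.encode('Sentence one.', 'Sentence two.')
label = roberta.predict('sentence_classification_head', tokens).argmax().item()
print(label)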

Wikipedia and BookCorpus pretraining data

# go to your data path
cd /nfs/zhujinguo/datasets/data/bert_pretrain_data/
mkdir bookswiki
mv bc1g.doc wiki.doc ./bookswiki
cd bookswiki
# concatenate BookCorpus and Wikipedia into one file
cat bc1g.doc wiki.doc > bookswiki.doc
wc -l bookswiki.doc
# take the first 1000 lines for the valid and test datasets
head -n 1000 bookswiki.doc > bookswiki-1000.doc
wc -l bookswiki-1000.doc
# BPE-encode and binarize
sh scripts/encode_gpt2_bpe_bookswiki.sh
sh scripts/preprocess_gpt2_dict_bookswiki.sh
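
The two scripts above are specific to this fork and wrap GPT-2 BPE encoding and fairseq binarization. Assuming they write their output to a directory such as data-bin/bookswiki (a hypothetical path; adjust to whatever the scripts actually use), the result can be sanity-checked from Python:

from fairseq.data import Dictionary, data_utils

data_dir = 'data-bin/bookswiki'  # hypothetical output path of the preprocessing scripts
dictionary = Dictionary.load(f'{data_dir}/dict.txt')
dataset = data_utils.load_indexed_dataset(f'{data_dir}/train', dictionary)
print(len(dataset))                    # number of binarized samples
print(dictionary.string(dataset[0]))   # decode the first sample back to BPE symbols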





Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks.

We provide reference implementations of various sequence modeling papers; see the examples directory for the full list.


We also provide pre-trained models for translation and language modeling with a convenient torch.hub interface:

en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model')
en2de.translate('Hello world', beam=5)
# 'Hallo Welt'

See the PyTorch Hub tutorials for translation and RoBERTa for more examples.
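
For example, RoBERTa can be loaded through the same torch.hub interface (a minimal sketch; roberta.base and fill_mask are part of upstream fairseq's hub interface):

import torch

roberta = torch.hub.load('pytorch/fairseq', 'roberta.base')
roberta.eval()
# query the masked-LM head; returns the top-k fillers with scores
print(roberta.fill_mask('The capital of France is <mask>.', topk=3))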

Requirements and Installation

  • PyTorch version >= 1.5.0
  • Python version >= 3.6
  • For training new models, you'll also need an NVIDIA GPU and NCCL
  • To install fairseq and develop locally:
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./

# on MacOS:
# CFLAGS="-stdlib=libc++" pip install --editable ./

# to install the latest stable release (0.10.x)
# pip install fairseq
  • For faster training install NVIDIA's apex library:
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./

# pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" \
#   --global-option="--deprecated_fused_adam" --global-option="--xentropy" \
#   --global-option="--fast_multihead_attn" ./ 
  • For large datasets install PyArrow: pip install pyarrow
  • If you use Docker, make sure to increase the shared memory size, either with --ipc=host or --shm-size as command-line options to nvidia-docker run.
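
After installing, a quick sanity check that the toolkit and GPU stack are usable (a minimal sketch using only the packages installed above):

import torch
import fairseq

print(fairseq.__version__)         # fairseq imports and reports its version
print(torch.cuda.is_available())   # True if an NVIDIA GPU is visible for training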

Getting Started

The full documentation contains instructions for getting started, training new models and extending fairseq with new model types and tasks.

Pre-trained models and examples

We provide pre-trained models and pre-processed, binarized test sets for several tasks listed below, as well as example training and evaluation commands.

We also have more detailed READMEs to reproduce results from specific papers:

Join the fairseq community

License

fairseq(-py) is MIT-licensed. The license applies to the pre-trained models as well.

Citation

Please cite as:

@inproceedings{ott2019fairseq,
  title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year = {2019},
}
