
FBK-fairseq

This repository contains the code for the following published papers:

Speechformer

This repository contains the code for the preprocessing, training and evaluation steps of the PlainConvattention and Speechformer architectures as well as the pretrained models.

For further details, please refer to the paper: Speechformer: Reducing Information Loss in Direct Speech Translation.

Setup

Clone this repository and install it as explained in the original Fairseq(-py) README below. For the experiments we used MuST-C (en-de, en-es, en-nl), so make sure to download the corpus.
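For reference, a minimal setup could look like the following sketch (the repository URL is assumed from the repository name; the installation command is the same as for the original Fairseq(-py), reported below):

git clone https://github.com/indra622/FBK-fairseq.git
cd FBK-fairseq
pip install --editable ./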

Preprocessing

Before starting the training, the data has to be preprocessed. After downloading the MuST-C dataset into the MUSTC_ROOT directory, create your working directory DATA_ROOT and link the data for the target language LANG to be preprocessed into it with the following command:

mkdir -p $DATA_ROOT
for t in train dev tst-COMMON; do
  ln -s ${MUSTC_ROOT}en-$LANG/data/$t/txt/$t.* $DATA_ROOT
  ln -s ${MUSTC_ROOT}en-$LANG/data/$t/wav/* $DATA_ROOT
done

Once your DATA_ROOT is ready, run the following command to preprocess the data, where FAIRSEQ_DIR is the path to this Fairseq installation and MUSTC_SAVE_DIR is the path where you want to save the preprocessed files (it can be equal to DATA_ROOT):

python ${FAIRSEQ_DIR}/examples/speech_to_text/preprocess_generic.py \
  --data-root ${DATA_ROOT} --wav-dir ${MUSTC_ROOT}/wav \
  --save-dir ${MUSTC_SAVE_DIR} \
  --task st --src-lang en --tgt-lang ${LANG} \
  --splits train dev tst-COMMON \
  --vocab-type unigram \
  --vocab-size 8000 \
  --src-normalize 

⭐️Pay attention! ➜ To replicate the Speechformer experiments, the source vocabulary size has to be 5000. Run the script again, changing --vocab-size 8000 to --vocab-size 5000 and adding the option --no-filterbank-extraction to avoid re-computing the mel-filterbank features.
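Concretely, the second preprocessing run only differs in those two options (all other arguments are unchanged from the command above):

python ${FAIRSEQ_DIR}/examples/speech_to_text/preprocess_generic.py \
  --data-root ${DATA_ROOT} --wav-dir ${MUSTC_ROOT}/wav \
  --save-dir ${MUSTC_SAVE_DIR} \
  --task st --src-lang en --tgt-lang ${LANG} \
  --splits train dev tst-COMMON \
  --vocab-type unigram \
  --vocab-size 5000 \
  --src-normalize \
  --no-filterbank-extraction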

Training

Below are the scripts for training both the PlainConvattention and Speechformer architectures.

⭐️Please note that the training phase of PlainConvattention (which corresponds to the encoder pretraining of the Speechformer) is mandatory to successfully train the Speechformer architecture.

PlainConvattention

To start the training of the PlainConvattention architecture, run the following command, where ST_SAVE_DIR is the directory in which you want to save the trained model and CONFIG_YAML_NAME is the name of the .yaml config file (without the extension):

fairseq-train ${MUSTC_SAVE_DIR} \
        --train-subset train_st_src --valid-subset dev_st_src \
        --save-dir ${ST_SAVE_DIR} \
        --num-workers 8 --max-update 100000 \
        --max-tokens 10000 \
        --user-dir examples/speech_to_text \
        --task speech_to_text_ctc --config-yaml ${CONFIG_YAML_NAME}.yaml \
        --criterion ctc_multi_loss --underlying-criterion label_smoothed_cross_entropy \
        --label-smoothing 0.1 --best-checkpoint-metric loss \
        --arch speechformer_m \
        --ctc-encoder-layer 8 \
        --compressed 4 --compress-kernel-size 8 --stride 1 \
        --shared-layer-kv-compressed --shared-kv-compressed \
        --CNN-first-layer \
        --optimizer adam --lr 1e-3 --lr-scheduler inverse_sqrt \
        --warmup-updates 10000 \
        --clip-norm 10.0 \
        --seed 1 --update-freq 16 \
        --skip-invalid-size-inputs-valid-test 

The script above is intended to be run on 2 V100 GPUs with 32GB of RAM. If you have more GPUs, divide the --update-freq parameter accordingly, e.g. with 4 GPUs use --update-freq 8. If your GPUs have less RAM, you can halve the --max-tokens value and double --update-freq.
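For reference, the effective batch size stays roughly constant across these settings: 2 GPUs × 10,000 max tokens × 16 update-freq ≈ 320,000 tokens per update, which is also reached with 4 GPUs and --update-freq 8, or with --max-tokens 5000 and --update-freq 32 on 2 GPUs.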

Speechformer

To start the training of the Speechformer architecture, the first step is to select only the first part of the PlainConvattention encoder (up to the layer to which the CTC loss is applied) by running this command:

python ${FAIRSEQ_DIR}/examples/speech_to_text/strip_after_ctc.py \
  --user-dir examples/speech_to_text \
  --model-path ${CHECKPOINT_PATH} \
  --new-model-path ${STRIPPED_CHECKPOINT_PATH} 

where CHECKPOINT_PATH is the absolute path to your PlainConvattention checkpoint (.pt) and STRIPPED_CHECKPOINT_PATH is the absolute path of the new checkpoint (.pt) to be generated, containing only the first part of the encoder. --num-encoder-layers and --ctc-encoder-layer also have to be specified if they differ from our default architecture (12 and 8, respectively).
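As a sketch, specifying those two values explicitly (here set to the defaults, 12 and 8) would look like:

python ${FAIRSEQ_DIR}/examples/speech_to_text/strip_after_ctc.py \
  --user-dir examples/speech_to_text \
  --model-path ${CHECKPOINT_PATH} \
  --new-model-path ${STRIPPED_CHECKPOINT_PATH} \
  --num-encoder-layers 12 --ctc-encoder-layer 8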

⭐️Please note that, to replicate our paper, the checkpoint used is the average of 7 checkpoints, as explained in the Generate section.

Then, to start the training, run the following command:

fairseq-train ${MUSTC_SAVE_DIR} \
        --train-subset train_st_src --valid-subset dev_st_src \
        --save-dir ${ST_SAVE_DIR} \
        --num-workers 8 --max-update 100000 \
        --max-tokens 10000 \
        --user-dir examples/speech_to_text \
        --task speech_to_text_ctc --config-yaml ${CONFIG_YAML_NAME}.yaml  \
        --criterion ctc_multi_loss --underlying-criterion label_smoothed_cross_entropy \
        --label-smoothing 0.1 --best-checkpoint-metric loss \
        --arch speechformer_m \
        --load-pretrained-encoder-from ${STRIPPED_CHECKPOINT_PATH} \
        --allow-partial-encoder-loading \
        --transformer-after-compression \
        --ctc-encoder-layer 8 \
        --ctc-compress-strategy avg \
        --compressed 4 --compress-kernel-size 8 --stride 1 \
        --shared-layer-kv-compressed --shared-kv-compressed \
        --CNN-first-layer \
        --optimizer adam --lr 1e-3 --lr-scheduler inverse_sqrt \
        --warmup-updates 10000 \
        --clip-norm 10.0 \
        --seed 1 --update-freq 16 \
        --skip-invalid-size-inputs-valid-test

You can use the --patience parameter to stop the training early once the loss has not improved for a certain number of epochs (15 in our case).
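In our case, that simply means appending one more option to the fairseq-train command above:

        --patience 15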

Generate

For the generate phase, you first have to average 7 checkpoints, among which the middle one is the best checkpoint on the validation set (according to the loss) obtained during training. Run the following command, setting BEST_CKP to the number of your best checkpoint (the upper bound BEST_CKP+3 makes the average span 7 checkpoints centered on it) and AVERAGE_CHECKPOINT_NAME to the name you want to give to the averaged checkpoint:

python ${FAIRSEQ_DIR}/scripts/average_checkpoints.py \
  --inputs ${ST_SAVE_DIR} \
  --output "${ST_SAVE_DIR}/${AVERAGE_CHECKPOINT_NAME}.pt" \
  --num-epoch-checkpoints 7 \
  --checkpoint-upper-bound $((BEST_CKP + 3))
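For instance, if checkpoint23.pt is the best checkpoint on the dev loss (a hypothetical value, only to illustrate the arithmetic), the upper bound becomes 26 and checkpoints 20 to 26, i.e. 7 in total, are averaged:

BEST_CKP=23                     # example value: checkpoint23.pt has the best dev loss
AVERAGE_CHECKPOINT_NAME=avg7    # any name you like for the averaged checkpoint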

Then, run the following command to generate the translations:

fairseq-generate ${MUSTC_SAVE_DIR} \
  --config-yaml ${CONFIG_YAML_NAME}.yaml \
  --gen-subset tst-COMMON_st_src \
  --task speech_to_text_ctc \
  --criterion ctc_multi_loss --underlying-criterion label_smoothed_cross_entropy \
  --user-dir examples/speech_to_text \
  --path ${ST_SAVE_DIR}/${AVERAGE_CHECKPOINT_NAME}.pt \
  --max-tokens 25000 --beam 5 --scoring sacrebleu --no-repeat-ngram-size 5 \
  --results-path ${ST_SAVE_DIR}

Note that we set --max-tokens 25000 since we used a K80 GPU with 12 GB of RAM to generate the output.
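Since --results-path is set, the hypotheses and the corpus-level sacreBLEU score are written to a file named after the generation subset rather than to standard output; assuming the standard fairseq-generate behaviour, this is ${ST_SAVE_DIR}/generate-tst-COMMON_st_src.txt, and the detokenized hypotheses can be inspected, for example, with:

grep ^D- ${ST_SAVE_DIR}/generate-tst-COMMON_st_src.txt | head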

⭐️PRETRAINED MODELS

Download our vocabulary and yaml files if you want to use our pretrained models:

Click on the corresponding language pair to download the model:

Model               --arch                 Params  en-de  en-nl  en-es
Baseline            s2t_transformer_m_fbk  77M     22.87  27.21  28.09
Baseline+compress.  s2t_transformer_m_fbk  77M     22.89  26.93  28.09
PlainConvattn       speechformer_m         79M     23.29  27.18  28.01
Speechformer        speechformer_m         79M     23.84  27.85  28.56

Remember that the results in our paper are the average BLEU scores of 3 runs, while here you can download the checkpoint of a single run.

Citation

Please cite as:

@inproceedings{papi2021speechformer,
  title = {{Speechformer: Reducing Information Loss in Direct Speech Translation}},
  author = {Papi, Sara and Gaido, Marco and Negri, Matteo and Turchi, Marco},
  booktitle = {Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year = {2021},
}

Below is the original Fairseq README file.






Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks.

We provide reference implementations of various sequence modeling papers.


We also provide pre-trained models for translation and language modeling with a convenient torch.hub interface:

en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model')
en2de.translate('Hello world', beam=5)
# 'Hallo Welt'

See the PyTorch Hub tutorials for translation and RoBERTa for more examples.

Requirements and Installation

  • PyTorch version >= 1.5.0
  • Python version >= 3.6
  • For training new models, you'll also need an NVIDIA GPU and NCCL
  • To install fairseq and develop locally:
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./

# on MacOS:
# CFLAGS="-stdlib=libc++" pip install --editable ./

# to install the latest stable release (0.10.0)
# pip install fairseq==0.10.0
  • For faster training install NVIDIA's apex library:
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" \
  --global-option="--deprecated_fused_adam" --global-option="--xentropy" \
  --global-option="--fast_multihead_attn" ./
  • For large datasets install PyArrow: pip install pyarrow
  • If you use Docker make sure to increase the shared memory size either with --ipc=host or --shm-size as command line options to nvidia-docker run (see the example below).
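A minimal sketch of such an invocation, assuming the nvidia-docker wrapper and a placeholder image name (fairseq-image):

nvidia-docker run --ipc=host -it --rm fairseq-image bash
# or, alternatively, with an explicit shared memory size:
nvidia-docker run --shm-size=8g -it --rm fairseq-image bash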

Getting Started

The full documentation contains instructions for getting started, training new models and extending fairseq with new model types and tasks.

Pre-trained models and examples

We provide pre-trained models and pre-processed, binarized test sets for several tasks listed below, as well as example training and evaluation commands.

We also have more detailed READMEs to reproduce results from specific papers:

Join the fairseq community

License

fairseq(-py) is MIT-licensed. The license applies to the pre-trained models as well.

Citation

Please cite as:

@inproceedings{ott2019fairseq,
  title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year = {2019},
}
