Sequence modeling with Mega.

Mega: Moving Average Equipped Gated Attention

This is the PyTorch implementation of the Mega paper. This folder is based on the fairseq package v0.9.0.

Mega: Moving Average Equipped Gated Attention

Xuezhe Ma*, Chunting Zhou*, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, Luke Zettlemoyer

Setup

This repository requires Python 3.8+ and PyTorch 1.11+.

# Install from this repo
pip install -e .

For faster training, install NVIDIA's apex library, following the fairseq instructions.

Examples

Model Checkpoints

| Task | Description | # params | Download |
| --- | --- | --- | --- |
| LRA | Mega on LRA tasks | -- | mega.lra.zip |
| WMT'14 (En-De) | Mega-base on WMT'14 En-De | 67M | meta.wmt14ende.base.zip |
| WMT'14 (De-En) | Mega-base on WMT'14 De-En | 67M | meta.wmt14deen.base.zip |
| SC-Raw | Mega-base/big on raw Speech Commands | 300k | meta.sc.zip |
| WikiText-103 | Language modeling on WikiText-103 | 252M | meta.wiki103.zip |
| Enwiki8 | Language modeling on Enwiki8 | 39M | meta.enwiki8.zip |

Experiments

Code Overview

  1. The Mega layer is implemented in fairseq/modules/mega_layer.py.
  2. The Mega encoder (LRA) is implemented in fairseq/models/lra/mega_lra_encoder.py.
  3. The Mega decoder (LM) is implemented in fairseq/models/mega_lm.py.
  4. The Mega encoder-decoder (NMT) is implemented in fairseq/models/mega.py.
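As a rough illustration of the multi-dimensional damped EMA at the heart of the Mega layer, here is a minimal sketch in plain PyTorch. This is not the repository's actual implementation (which projects to an expanded EMA dimension and computes the recurrence as an FFT-based convolution); the function and parameter names are invented for this example.

```python
import torch

def damped_ema(x, alpha, delta):
    """Sequential damped EMA: h_t = alpha * x_t + (1 - alpha * delta) * h_{t-1}.

    x: (seq_len, dim); alpha, delta: (dim,) with entries in (0, 1).
    Illustrative only -- the Mega layer computes this recurrence as a
    convolution (via FFT) for parallelism, not step by step.
    """
    h = torch.zeros(x.shape[-1])
    out = []
    for x_t in x:
        h = alpha * x_t + (1 - alpha * delta) * h
        out.append(h)
    return torch.stack(out)

# Tiny usage example with made-up numbers.
x = torch.ones(4, 2)
alpha = torch.tensor([0.5, 0.9])
delta = torch.tensor([1.0, 1.0])
y = damped_ema(x, alpha, delta)
```

With delta = 1 this reduces to a standard EMA; smaller delta damps the decay so earlier inputs retain influence longer.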

Tips

  1. Models are trained with float32, because at the time of development fft and rfft did not support fp16 in PyTorch 1.11.0. We will try fp16 with newer versions of PyTorch.
  2. If you'd like to apply Mega to your own task or data, the hyperparameters most worth tuning (besides architecture size) are the learning rate (lr) and weight decay (wd). We find that tuning wd is a more effective regularizer for Mega (in contrast to tuning dropout rates for Transformers). Suggested wd values are 0.01, 0.05, and 0.1; larger models typically need larger wd (please refer to the appendix of our paper for the hyperparameters we used). For the lr scheduler, linear and cosine lr decay schedules are more effective for Mega than the inverse square root decay scheduler.
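The weight-decay and lr-schedule suggestions above can be sketched with standard PyTorch utilities. This is a generic illustration, not the training setup used in this repository (which goes through fairseq's trainer); the model and all hyperparameter values below are placeholders.

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR

model = torch.nn.Linear(16, 16)  # stand-in for a Mega model

# AdamW applies decoupled weight decay; wd=0.05 is one of the suggested values.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.05)

# Cosine lr decay over a made-up total number of training steps.
scheduler = CosineAnnealingLR(optimizer, T_max=10_000)

for step in range(3):  # sketch of a training loop
    optimizer.zero_grad()
    loss = model(torch.randn(8, 16)).pow(2).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()  # lr decays smoothly toward zero over T_max steps
```

Linear decay can be expressed the same way with `torch.optim.lr_scheduler.LinearLR`; the key point from the tip is to prefer either of these over inverse square root decay when training Mega.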

License

Mega is released under the Attribution-NonCommercial 4.0 (CC BY-NC 4.0) license. The license applies to the model checkpoints as well.

Citation

@article{ma2022mega,
  title={Mega: Moving Average Equipped Gated Attention},
  author={Ma, Xuezhe and Zhou, Chunting and Kong, Xiang and He, Junxian and Gui, Liangke and Neubig, Graham and May, Jonathan and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:2209.10655},
  year={2022}
}
