Approaching Neural GEC as a Low-Resource MT Task

This repository contains the neural models and instructions for reproducing the results of our neural grammatical error correction systems from M. Junczys-Dowmunt, R. Grundkiewicz, S. Guha, K. Heafield: Approaching Neural Grammatical Error Correction as a Low-Resource Machine Translation Task, NAACL 2018.

Citation

@InProceedings{neural-naacl2018,
    author    = {Junczys-Dowmunt, Marcin  and  Grundkiewicz, Roman  and  Guha,
                 Shubha  and  Heafield, Kenneth},
    title     = {Approaching Neural Grammatical Error Correction as a
                 Low-Resource Machine Translation Task},
    booktitle = {Proceedings of the 2018 Conference of the North American
                 Chapter of the Association for Computational Linguistics:
                 Human Language Technologies, Volume 1 (Long Papers)},
    month     = {June},
    year      = {2018},
    address   = {New Orleans, Louisiana},
    publisher = {Association for Computational Linguistics},
    pages     = {595--606},
    url       = {http://www.aclweb.org/anthology/N18-1055}
}

Models

We have prepared the top neural GEC system described in the paper, which is an ensemble of four transformer models and a neural language model. Each translation model is pretrained with a language model and trained with the edit-weighted MLE objective on NUCLE and Lang-8 data.
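
The edit-weighted MLE objective scales up the log-likelihood of target tokens that take part in an edit, so corrections contribute more to the loss than tokens merely copied from the source. Below is a minimal sketch of that idea; the function names and the weight value are illustrative and not taken from the actual training scripts (Marian implements the objective internally):

# Minimal sketch of an edit-weighted MLE loss; names and the weight
# value are illustrative, not the repository's actual training code.
from difflib import SequenceMatcher

def edit_weights(source_tokens, target_tokens, lam=3.0):
    # Start with weight lam everywhere, then reset copied tokens to 1.0.
    weights = [lam] * len(target_tokens)
    matcher = SequenceMatcher(a=source_tokens, b=target_tokens)
    for block in matcher.get_matching_blocks():
        for j in range(block.b, block.b + block.size):
            weights[j] = 1.0  # token copied unchanged from the source
    return weights

def edit_weighted_nll(token_log_probs, weights):
    # Weighted negative log-likelihood over the target tokens.
    return -sum(w * lp for w, lp in zip(weights, token_log_probs)) / len(weights)

# "a" and "cat" are corrections, "I have" is copied from the source.
print(edit_weights("I have cats".split(), "I have a cat".split()))
# -> [1.0, 1.0, 3.0, 3.0]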

The systems are built with training settings very similar to those described in the paper. Small performance differences arise mainly from the use of a more recent version of the Marian toolkit, which comes with new features. The most noticeable change is the replacement of averaged model checkpoints with exponential smoothing. Differences in less significant training hyperparameters may also exist. Other settings, including the data and data preprocessing, remain exactly the same as in the original paper.
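
Exponential smoothing keeps a running average of the model parameters during training and uses the smoothed copy for validation and decoding, which serves the same purpose as checkpoint averaging. A minimal sketch, assuming a dictionary of NumPy parameter arrays; the decay constant is illustrative, and Marian applies this internally during training:

# Minimal sketch of exponential smoothing of model parameters;
# the decay constant is illustrative.
import numpy as np

def smooth(avg_params, params, decay=1e-4):
    # After every optimizer step, nudge the running average
    # towards the current raw parameters.
    return {name: (1.0 - decay) * avg_params[name] + decay * params[name]
            for name in params}

params = {"W": np.zeros((2, 2))}
avg = {name: p.copy() for name, p in params.items()}
for step in range(1000):
    params["W"] += 0.01        # stand-in for an optimizer update
    avg = smooth(avg, params)  # smoothed copy trails the raw weights
print(params["W"][0, 0], avg["W"][0, 0])  # raw value vs. smoothed value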

Content

  • models - pretrained neural models, instructions on how to use them, and scripts to evaluate the system on the CoNLL and JFLEG data sets (the log-linear ensemble scoring they perform at decoding time is sketched after this list)
  • outputs - corrected outputs and evaluation scores for the CoNLL and JFLEG data sets generated by the prepared GEC system
  • training - complete training pipeline reproducing the prepared neural models
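
At decoding time, the four translation models and the language model are combined log-linearly: each candidate token is scored by a weighted sum of the models' log-probabilities. A minimal sketch of that scoring step, with placeholder weights rather than the tuned values used by the prepared system:

# Minimal sketch of log-linear ensemble scoring for a single candidate
# token; the weights are placeholders, not the system's tuned values.
import math

def ensemble_score(tm_log_probs, lm_log_prob, tm_weights, lm_weight):
    # Weighted sum of log-probabilities from the four translation
    # models plus the neural language model.
    score = sum(w * lp for w, lp in zip(tm_weights, tm_log_probs))
    return score + lm_weight * lm_log_prob

tm_scores = [math.log(p) for p in (0.40, 0.35, 0.50, 0.45)]
lm_score = math.log(0.30)
print(ensemble_score(tm_scores, lm_score,
                     tm_weights=[1.0, 1.0, 1.0, 1.0], lm_weight=0.5))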

If you have any questions, please open an issue or send me (Roman) an email.

License

MIT License

