Approaching Neural GEC as a Low-Resource MT Task

This repository contains the neural models and instructions for reproducing the results of our neural grammatical error correction systems from M. Junczys-Dowmunt, R. Grundkiewicz, S. Guha, K. Heafield: Approaching Neural Grammatical Error Correction as a Low-Resource Machine Translation Task, NAACL 2018.

Citation

@InProceedings{neural-naacl2018,
    author    = {Junczys-Dowmunt, Marcin  and  Grundkiewicz, Roman  and  Guha,
                 Shubha  and  Heafield, Kenneth},
    title     = {Approaching Neural Grammatical Error Correction as a
                 Low-Resource Machine Translation Task},
    booktitle = {Proceedings of the 2018 Conference of the North American
                 Chapter of the Association for Computational Linguistics:
                 Human Language Technologies, Volume 1 (Long Papers)},
    month     = {June},
    year      = {2018},
    address   = {New Orleans, Louisiana},
    publisher = {Association for Computational Linguistics},
    pages     = {595--606},
    url       = {http://www.aclweb.org/anthology/N18-1055}
}

Models

We provide the top neural GEC system described in the paper: an ensemble of four transformer models and a neural language model. Each translation model is pretrained with a language model and trained with an edit-weighted MLE objective on NUCLE and Lang-8 data.
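
For illustration only (file names and weights below are placeholders, not the exact commands from this repository), ensembling in Marian amounts to passing all checkpoints to a single marian-decoder call; a language model can be added as a further scorer with its own weight:

    # Hypothetical sketch of ensemble decoding with Marian; the scripts in
    # models/ contain the actual invocation and the tuned ensemble weights.
    cat input.txt | marian-decoder \
        -m model1.npz model2.npz model3.npz model4.npz \
        -v vocab.yml vocab.yml \
        --weights 1.0 1.0 1.0 1.0 \
        -b 12 --normalize 0.6 \
        -d 0 > output.txt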

The systems were created using training settings very similar to those described in the paper. Small differences come mainly from the fact that we use a recent version of Marian to train the models. The main difference is the use of exponential smoothing instead of averaging model checkpoints. Differences in less significant training hyperparameters might also exist, as I recreated the configuration files from scratch. Other settings, including the data and data preprocessing, remain exactly the same as in the original paper.
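
For context, exponential smoothing is a single switch in recent Marian versions, and Marian's word-level data weighting is the mechanism used for edit-weighted training; a minimal hypothetical training invocation (all file names and hyperparameters below are placeholders, see the configs in training/ for the settings actually used) might look like:

    # Hypothetical sketch of a Marian training command; the flags are real
    # Marian options, but every file name is a placeholder.
    marian \
        --type transformer -m model.npz \
        -t train.src train.trg -v vocab.yml vocab.yml \
        --data-weighting weights.txt --data-weighting-type word \
        --exponential-smoothing \
        -d 0 1 2 3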

Content

  • models - the prepared neural models, instructions on how to use them, and scripts to evaluate the system on the CoNLL and JFLEG data sets (see the evaluation sketch after this list)
  • outputs - corrected outputs and evaluation scores for the CoNLL and JFLEG data sets, generated by the prepared GEC system
  • training - the complete training pipeline, which allows you to reproduce the prepared neural models
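
As a reference point, CoNLL outputs are conventionally scored with the M2 scorer against the official gold annotations; a minimal sketch (both paths are placeholders; the scripts in models/ wrap the full evaluation):

    # Hypothetical sketch: score a corrected output on CoNLL-2014 with the
    # M2 scorer; both paths are placeholders.
    ./m2scorer/m2scorer system_output.txt official-2014.combined.m2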

If you have any questions, please open an issue or send me (Roman) an email.
