Neural machine translation and sequence learning using TensorFlow

Home page: http://opennmt.net/


OpenNMT-tf

OpenNMT-tf is a general-purpose sequence learning toolkit using TensorFlow 2.0. While neural machine translation is the main target task, it has been designed to more generally support:

  • sequence to sequence mapping
  • sequence tagging
  • sequence classification
  • language modeling

The project is production-oriented and comes with backward compatibility guarantees.

Key features

Modular model architecture

Models are described with code to allow training custom architectures. For example, the following instance defines a sequence to sequence model with 2 concatenated input features, a self-attentional encoder, and an attentional RNN decoder sharing its input and output embeddings:

import opennmt
import tensorflow_addons as tfa

opennmt.models.SequenceToSequence(
    source_inputter=opennmt.inputters.ParallelInputter(
        [opennmt.inputters.WordEmbedder(embedding_size=256),
         opennmt.inputters.WordEmbedder(embedding_size=256)],
        reducer=opennmt.layers.ConcatReducer(axis=-1)),
    target_inputter=opennmt.inputters.WordEmbedder(embedding_size=512),
    encoder=opennmt.encoders.SelfAttentionEncoder(num_layers=6),
    decoder=opennmt.decoders.AttentionalRNNDecoder(
        num_layers=4,
        num_units=512,
        attention_mechanism_class=tfa.seq2seq.LuongAttention),
    share_embeddings=opennmt.models.EmbeddingsSharingLevel.TARGET)

The opennmt package exposes other building blocks that can be used to design custom architectures, such as multi-source models or hybrid sequence to sequence models.

Standard models such as the Transformer are defined in a model catalog and can be used without additional configuration.
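
From the library, a catalog model can also be instantiated directly. A minimal sketch, assuming the TransformerBase catalog entry (check the catalog for the exact model names):

import opennmt

# Base Transformer from the model catalog; no further configuration needed.
model = opennmt.models.TransformerBase()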

Find more information about model configuration in the documentation.

Full TensorFlow 2.0 integration

OpenNMT-tf is fully integrated in the TensorFlow 2.0 ecosystem, from tf.data input pipelines to SavedModel export for TensorFlow Serving.

Dynamic data pipeline

OpenNMT-tf does not require compiling the data before training. Instead, it reads text files directly and preprocesses the data as needed during training. This allows on-the-fly tokenization and data augmentation by injecting random noise.
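
The idea can be illustrated with plain tf.data (a conceptual sketch, not the OpenNMT-tf internals; the file name is hypothetical):

import tensorflow as tf

# Read raw text lines lazily; nothing is compiled to disk beforehand.
dataset = tf.data.TextLineDataset("src-train.txt")
# Tokenization runs on the fly, once per pass over an example.
dataset = dataset.map(tf.strings.split)
# Shuffling (and any noise injection) is also applied dynamically.
dataset = dataset.shuffle(buffer_size=100000)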

Model fine-tuning

OpenNMT-tf supports model fine-tuning workflows:

  • Model weights can be transferred to new word vocabularies, e.g. to inject domain terminology before fine-tuning on in-domain data (see the sketch after this list)
  • Contrastive learning to reduce word omission errors
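
Vocabulary replacement is exposed through the update_vocab run type. A hedged sketch of the invocation (flag names and paths are illustrative; see the documentation for the exact options):

onmt-main --config config.yml --auto_config update_vocab \
    --output_dir new_checkpoint \
    --src_vocab new-src-vocab.txt --tgt_vocab new-tgt-vocab.txt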

Source-target alignment

Sequence to sequence models can be trained with guided alignment, and alignment information is returned as part of the translation API.
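
In the YAML configuration, guided alignment is enabled with parameters along these lines (a sketch; the alignment file path is illustrative):

data:
  train_alignments: train-alignments.txt  # source-target alignments, e.g. produced by fast_align
params:
  guided_alignment_type: ce  # cross entropy loss applied to the attention weights
  guided_alignment_weight: 1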


OpenNMT-tf also implements most of the techniques commonly used to train and evaluate sequence models, such as:

  • automatic evaluation during training
  • multiple decoding strategies: greedy search, beam search, random sampling
  • N-best rescoring
  • gradient accumulation
  • scheduled sampling
  • checkpoint averaging
  • ... and more!

See the documentation to learn how to use these features.
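
For example, the decoding strategy is selected through inference parameters in the YAML configuration (a sketch, assuming the beam_width parameter; see the documentation for the full list):

params:
  beam_width: 5  # beam search of width 5; a width of 1 falls back to greedy search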

Usage

OpenNMT-tf requires:

  • Python >= 3.5

We recommend installing it with pip:

pip install --upgrade pip
pip install OpenNMT-tf
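
The installation can then be verified from Python:

import opennmt

print(opennmt.__version__)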

See the documentation for more information.

Command line

OpenNMT-tf comes with several command line utilities to prepare data, train, and evaluate models.

For all tasks involving a model execution, OpenNMT-tf uses a single entry point: onmt-main. A typical OpenNMT-tf run consists of 3 elements:

  • the model type
  • the parameters described in a YAML file
  • the run type such as train, eval, infer, export, score, average_checkpoints, or update_vocab

that are passed to the main script:

onmt-main --model_type <model> --config <config_file.yml> --auto_config <run_type> <run_options>
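
For example, training a Transformer from the model catalog could look like this (a sketch following the quickstart; all file paths are illustrative):

# data.yml (run configuration)
model_dir: run/

data:
  train_features_file: src-train.txt
  train_labels_file: tgt-train.txt
  eval_features_file: src-val.txt
  eval_labels_file: tgt-val.txt
  source_vocabulary: src-vocab.txt
  target_vocabulary: tgt-vocab.txt

followed by:

onmt-main --model_type Transformer --config data.yml --auto_config train --with_eval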

For more information and examples on how to use OpenNMT-tf, please visit our documentation.

Library

OpenNMT-tf also exposes well-defined and stable APIs. Here is an example using the library to run beam search with a self-attentional decoder:

import tensorflow as tf
import opennmt

decoder = opennmt.decoders.SelfAttentionDecoder(num_layers=6)
decoder.initialize(vocab_size=32000)

# `memory` is the encoder output and `memory_sequence_length` its true
# lengths; `target_embedding` is the target embedding matrix. All three
# are assumed to be defined earlier.
initial_state = decoder.initial_state(
    memory=memory,
    memory_sequence_length=memory_sequence_length)

batch_size = tf.shape(memory)[0]
start_ids = tf.fill([batch_size], opennmt.START_OF_SENTENCE_ID)

decoding_result = decoder.dynamic_decode(
    target_embedding,
    start_ids=start_ids,
    initial_state=initial_state,
    decoding_strategy=opennmt.utils.BeamSearch(4))

For more advanced examples, some online resources use OpenNMT-tf as a library:

  • The directory examples/library contains additional examples that use OpenNMT-tf as a library
  • nmt-wizard-docker uses the high-level opennmt.Runner API to wrap OpenNMT-tf with a custom interface for training, translating, and serving

For a complete overview of the APIs, see the package documentation.


License

OpenNMT-tf is licensed under the MIT License.