ofirpress / sockeye

Sequence-to-sequence framework with a focus on Neural Machine Translation based on Apache MXNet

Sockeye

This package contains the Sockeye project, a sequence-to-sequence framework for Neural Machine Translation based on Apache MXNet. It implements the well-known encoder-decoder architecture with attention.

If you are interested in collaborating or have any questions, please submit a pull request or issue. You can also send questions to sockeye-dev-at-amazon-dot-com.

Dependencies

Sockeye requires:

- Python 3
- Apache MXNet

Installation

For AWS DeepLearning AMI users

AWS DeepLearning AMI users only need to run the following line to install sockeye:

> sudo pip3 install sockeye --no-deps

For other environments, you can choose between installing via pip or directly from source.

pip

CPU

> pip install sockeye

GPU

If you want to run Sockeye on a GPU, you need to make sure your version of Apache MXNet contains the GPU code. Depending on your version of CUDA, you can do this by running the following for CUDA 8.0:

> wget https://raw.githubusercontent.com/awslabs/sockeye/master/requirements.gpu-cu80.txt
> pip install sockeye --no-deps -r requirements.gpu-cu80.txt
> rm requirements.gpu-cu80.txt

or the following for CUDA 7.5:

> wget https://raw.githubusercontent.com/awslabs/sockeye/master/requirements.gpu-cu75.txt
> pip install sockeye --no-deps -r requirements.gpu-cu75.txt
> rm requirements.gpu-cu75.txt

From Source

CPU

If you just want to use Sockeye without extending it, simply install it via

> python setup.py install

after cloning the repository from git.

GPU

If you want to run Sockeye on a GPU, you need to make sure your version of Apache MXNet contains the GPU code. Depending on your version of CUDA, you can do this by running the following for CUDA 8.0:

> python setup.py install -r requirements.gpu-cu80.txt

or the following for CUDA 7.5:

> python setup.py install -r requirements.gpu-cu75.txt

Optional dependencies

To track learning curves during training, you can optionally install dmlc's tensorboard fork (pip install tensorboard). To create alignment plots, you will need matplotlib (pip install matplotlib).

In general you can install all optional dependencies from the Sockeye source folder using:

> pip install -e '.[optional]'

AWS DeepLearning AMI users need to use the python3 command instead of python.

Running sockeye

After installation, command line tools such as sockeye-train, sockeye-translate, sockeye-average, and sockeye-embeddings are available. Alternatively, if the sockeye directory is on your PYTHONPATH, you can run the modules directly. For example, sockeye-train can also be invoked as

> python -m sockeye.train <args>

AWS DeepLearning AMI users need to use the python3 command instead of python.

First Steps

Train

In order to train your first Neural Machine Translation model, you will need two sets of parallel files: one for training and one for validation. The latter is used to compute various metrics during training. Each set consists of two files: one with source sentences and one with target sentences (translations). Both files must have the same number of lines, with each line containing a single sentence. Each sentence should be a whitespace-delimited list of tokens.
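These format constraints (equal line counts, one non-empty whitespace-tokenized sentence per line) are worth verifying before training. The following sketch is a hypothetical helper, not part of Sockeye:

```python
# Minimal sanity check for a parallel corpus (hypothetical helper, not part
# of Sockeye): both sides must have the same number of lines, and each line
# should be a non-empty, whitespace-delimited token sequence.
def check_parallel(source_lines, target_lines):
    if len(source_lines) != len(target_lines):
        raise ValueError(
            f"line count mismatch: {len(source_lines)} vs {len(target_lines)}")
    for i, (src, tgt) in enumerate(zip(source_lines, target_lines), start=1):
        if not src.split() or not tgt.split():
            raise ValueError(f"empty sentence on line {i}")
    return len(source_lines)

source = ["ein kleines haus", "das ist gut"]   # e.g. from sentences.de
target = ["a small house", "that is good"]     # e.g. from sentences.en
num_pairs = check_parallel(source, target)
print(num_pairs)  # number of parallel sentence pairs
```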

Say you want to train a German-to-English translation model; you would then call Sockeye like this:

> python -m sockeye.train --source sentences.de \
                       --target sentences.en \
                       --validation-source sentences.dev.de \
                       --validation-target sentences.dev.en \
                       --use-cpu \
                       --output <model_dir>

After training, the directory <model_dir> will contain all model artifacts, such as parameters and the model configuration.

Translate

Input data for translation should be in the same format as the training data (same tokenization and preprocessing scheme). You can translate as follows:

> python -m sockeye.translate --models <model_dir> --use-cpu

This will load the best set of parameters found during training, translate sentences from STDIN, and write the translations to STDOUT.
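Because the translator is line-oriented on STDIN/STDOUT, it composes with shell pipelines and can also be driven from a script via a subprocess. In this sketch, a trivial line-echoing command stands in for the real `python -m sockeye.translate --models <model_dir> --use-cpu` invocation, so the example runs without a trained model:

```python
import subprocess
import sys

# The real command would be (assuming a trained model in model_dir/):
#   cmd = [sys.executable, "-m", "sockeye.translate",
#          "--models", "model_dir", "--use-cpu"]
# A line-echoing stand-in keeps this sketch self-contained.
cmd = [sys.executable, "-c",
       "import sys\nfor line in sys.stdin: sys.stdout.write(line)"]

source_sentences = ["ein kleines haus", "das ist ein test"]
result = subprocess.run(cmd,
                        input="\n".join(source_sentences) + "\n",
                        capture_output=True, text=True, check=True)
translations = result.stdout.splitlines()
print(translations)  # one output line per input line
```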

For more detailed examples check out our user documentation.

License: Apache License 2.0