nkmeng/sru

Training RNNs as Fast as CNNs (https://arxiv.org/abs/1709.02755)

About

SRU is a recurrent unit that can run over 10 times faster than cuDNN LSTM, with no loss of accuracy on the many tasks tested.


Average processing time of LSTM, conv2d and SRU, tested on GTX 1070

For example, the figure above shows the average processing time of a single mini-batch of 32 samples. SRU achieves a 10 to 16 times speed-up over cuDNN LSTM and runs as fast as (or faster than) word-level convolution using conv2d.
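The exact speed-up depends on the hardware and layer sizes. A rough comparison can be reproduced with a sketch along the following lines; this is illustrative only (not the benchmark script behind the figure) and assumes a CUDA device plus the SRU class from this repo.

import time
import torch
import torch.nn as nn
from torch.autograd import Variable
from cuda_functional import SRU

# mini-batch of 32 samples, sequence length 20, feature dimension 128
length, batch, dim = 20, 32, 128
x = Variable(torch.randn(length, batch, dim).cuda())

lstm = nn.LSTM(dim, dim, num_layers=2).cuda()
sru = SRU(dim, dim, num_layers=2).cuda()

def avg_forward_time(module, x, n=100):
    # average wall-clock time of n forward passes over one mini-batch
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(n):
        module(x)
    torch.cuda.synchronize()
    return (time.time() - start) / n

print("LSTM: %.5f s/batch" % avg_forward_time(lstm, x))
print("SRU:  %.5f s/batch" % avg_forward_time(sru, x))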

Reference:

Training RNNs as Fast as CNNs

@article{lei2017sru,
  title={Training RNNs as Fast as CNNs},
  author={Tao Lei and Yu Zhang and Yoav Artzi},
  journal={arXiv preprint arXiv:1709.02755},
  year={2017}
}

Requirements

Install requirements via pip install -r requirements.txt. CuPy and pynvrtc are needed to compile the CUDA code into a callable function at runtime. Only single-GPU training is supported.
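Before training, it can help to confirm that the runtime-compilation dependencies and a GPU are actually visible. A minimal sanity-check sketch (not part of the repo):

import torch

try:
    import cupy      # compiles and launches the CUDA kernels at runtime
    import pynvrtc   # NVIDIA runtime-compilation bindings used by cuda_functional.py
except ImportError as err:
    raise SystemExit("Missing dependency (%s); run: pip install -r requirements.txt" % err)

assert torch.cuda.is_available(), "SRU's CUDA kernels require a visible GPU"
print("CuPy %s, GPU: %s" % (cupy.__version__, torch.cuda.get_device_name(0)))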


Examples

The usage of SRU is similar to nn.LSTM. SRU likely requires more stacked layers than LSTM. We recommend starting with 2 layers and using more if necessary (see our report for more experimental details).

import torch
from torch.autograd import Variable
from cuda_functional import SRU, SRUCell

# input has length 20, batch size 32 and dimension 128
x = Variable(torch.FloatTensor(20, 32, 128).cuda())

input_size, hidden_size = 128, 128

rnn = SRU(input_size, hidden_size,
    num_layers = 2,          # number of stacking RNN layers
    dropout = 0.0,           # dropout applied between RNN layers
    rnn_dropout = 0.0,       # variational dropout applied on linear transformation
    use_tanh = 1,            # use tanh?
    use_relu = 0,            # use ReLU?
    use_selu = 0,            # use SeLU?
    bidirectional = False,   # bidirectional RNN ?
    weight_norm = False,     # apply weight normalization on parameters
    layer_norm = False,      # apply layer normalization on the output of each layer
    highway_bias = 0         # initial bias of highway gate (<= 0)
)
rnn.cuda()

output_states, c_states = rnn(x)      # forward pass

# output_states is (length, batch size, number of directions * hidden size)
# c_states is (layers, batch size, number of directions * hidden size)
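
# (Illustrative continuation, not from the repo.) A common way to use the outputs:
# take the top layer's state at the last time step and feed it to a classifier.
# `num_classes` below is a made-up value.
num_classes = 5
classifier = torch.nn.Linear(hidden_size, num_classes).cuda()

last_step = output_states[-1]      # (batch size, number of directions * hidden size)
logits = classifier(last_step)     # (batch size, num_classes)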

Make sure cuda_functional.py and the CUDA shared libraries (cuda/lib64) can be found by the system, e.g.

export LD_LIBRARY_PATH=/usr/local/cuda/lib64
export PYTHONPATH=path_to_repo/sru

Instead of using PYTHONPATH, the SRU module can now be installed as a regular package via python setup.py install or pip install. See this PR.



Contributors

https://github.com/taolei87/sru/graphs/contributors

Other Implementations

@musyoku had a very nice SRU implementation in Chainer.

@adrianbg implemented the CPU version.


To-do

  • ReLU activation
  • support multi-GPU via nn.DataParallel (see example here)
  • layer normalization
  • weight normalization
  • SeLU activation
  • residual
  • support packed sequence


License

MIT

