qinyao-he / bit-rnn

Quantize weights and activations in Recurrent Neural Networks.

Home page: https://arxiv.org/abs/1611.10176


Bit-RNN

Source code for the paper: Effective Quantization Methods for Recurrent Neural Networks.

The PTB language model implementation is adapted from the TensorFlow examples.

Requirements

Currently tested on TensorFlow 1.8 and Python 3.6. See other branches for legacy support. You can download the PTB dataset from http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz.

Run

python train.py --config=config.gru --data_path=YOUR_DATA_PATH

The default configuration uses 2-bit weights and activations. You can edit the config files in the config folder to change this.
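To illustrate what a 2-bit setting means, here is a minimal NumPy sketch of a uniform k-bit quantizer. The function name, the clipping to [0, 1], and the rounding scheme are illustrative assumptions, not the repo's exact implementation:

```python
import numpy as np

def quantize(x, bits=2):
    # Map values in [0, 1] onto 2**bits - 1 evenly spaced levels,
    # so a 2-bit quantizer produces only the values {0, 1/3, 2/3, 1}.
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

print(quantize(np.array([0.0, 0.4, 0.9, 1.0])))
```

During training, methods of this family typically pass the gradient straight through the non-differentiable rounding step (a straight-through estimator), so the quantizer only alters the forward pass.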

Support

Submit an issue for problems related to the code itself. Email the author with general questions about the paper.

Citation

Please cite the following if you use our code in your research:

@article{DBLP:journals/corr/HeWZWYZZ16,
  author    = {Qinyao He and
               He Wen and
               Shuchang Zhou and
               Yuxin Wu and
               Cong Yao and
               Xinyu Zhou and
               Yuheng Zou},
  title     = {Effective Quantization Methods for Recurrent Neural Networks},
  journal   = {CoRR},
  volume    = {abs/1611.10176},
  year      = {2016},
  url       = {http://arxiv.org/abs/1611.10176},
  timestamp = {Thu, 01 Dec 2016 19:32:08 +0100},
  biburl    = {http://dblp.uni-trier.de/rec/bib/journals/corr/HeWZWYZZ16},
  bibsource = {dblp computer science bibliography, http://dblp.org}
}

About


License: Apache License 2.0


Languages

Language: Python 100.0%