An end-to-end version of lattice-free MMI (LF-MMI, also known as the chain model), implemented in PyTorch. TODO: the regular (non-end-to-end) version of LF-MMI.
- Download OpenFST, then configure, build, and install it:

```shell
./configure --prefix=`pwd` --enable-static --enable-shared --enable-ngram-fsts \
    CXX="g++" LIBS="-ldl" \
    CPPFLAGS="-D_GLIBCXX_USE_CXX11_ABI=0" CXXFLAGS="-D_GLIBCXX_USE_CXX11_ABI=0"
make
make install
```
Note that the `-D_GLIBCXX_USE_CXX11_ABI=0` setting must match the ABI option used when PyTorch itself was compiled, otherwise linking the bindings will fail.
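To find out which value to use, you can query your PyTorch build directly. A minimal check, assuming a PyTorch version recent enough to provide `torch.compiled_with_cxx11_abi()`:

```python
# Print the C++ ABI setting of the installed PyTorch build, so OpenFST can
# be configured with the matching -D_GLIBCXX_USE_CXX11_ABI value.
import torch

# True  -> configure OpenFST with -D_GLIBCXX_USE_CXX11_ABI=1
# False -> configure OpenFST with -D_GLIBCXX_USE_CXX11_ABI=0
print(torch.compiled_with_cxx11_abi())
```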
- Set the environment paths and install the Python bindings:

```shell
export OPENFST_PATH=<your_dir>/openfst
export LD_LIBRARY_PATH=$OPENFST_PATH/lib:$LD_LIBRARY_PATH
cd openfst_binding
python setup.py install
cd ..
cd pytorch_binding
python setup.py install
cd ..
```
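After both installs, a quick sanity check that the extensions are importable can save a confusing failure later. The module names below are guesses based on the binding directory names; substitute whatever name each `setup.py` actually registers (check the `name=` field in its `setup()` call):

```python
# Post-install sanity check: report whether each binding module can be
# located on the current Python path. The names are HYPOTHETICAL; adjust
# them to the modules your setup.py files install.
import importlib.util

for name in ("openfst_binding", "pytorch_binding"):  # hypothetical names
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'found' if found else 'NOT found'}")
```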
- "End-to-end speech recognition using lattice-free MMI", Hossein Hadian, Hossein Sameti, Daniel Povey, Sanjeev Khudanpur, Interspeech 2018 (pdf)
- "Purely sequence-trained neural networks for ASR based on lattice-free MMI", Daniel Povey, Vijayaditya Peddinti, Daniel Galvez, Pegah Ghahrmani, Vimal Manohar, Xingyu Na, Yiming Wang and Sanjeev Khudanpur, Interspeech 2016, (pdf) (slides,pptx)
The code is adapted from the implementation in the Kaldi repository, but has no dependency on the Kaldi codebase.