
Tensorflow CTC Speech Recognition

  • Application of Connectionist Temporal Classification (CTC) for Speech Recognition (Tensorflow 1.0)
  • On the VCTK Corpus (same corpus as the one used by WaveNet).

How to get started?

git clone https://github.com/philipperemy/tensorflow-ctc-speech-recognition.git ctc-speech
cd ctc-speech
sudo pip3 install -r requirements.txt
# Download the VCTK Corpus here: http://homepages.inf.ed.ac.uk/jyamagis/page3/page58/page58.html
wget http://homepages.inf.ed.ac.uk/jyamagis/release/VCTK-Corpus.tar.gz # 10GB!
python3 generate_audio_cache.py
python3 ctc_tensorflow_example.py # to run the experiment defined in the section First Experiment.

Requirements

  • dill: improved pickling (serialization) library
  • librosa: library for loading and analyzing audio (WAV) files
  • namedtupled: converts dictionaries to named tuples
  • numpy: scientific computing library
  • python_speech_features: extracts relevant features (e.g. MFCCs) from raw audio
  • tensorflow: machine learning library
  • progressbar2: progress bar

First experiment

Set up

Speech Recognition is a very difficult topic. In this first experiment, we consider:

  • A very small subset of the VCTK Corpus composed of only one speaker: p225.
  • Only 5 sentences of this speaker, denoted as: 001, 002, 003, 004 and 005.

The network is defined as:

  • One LSTM layer (rnn.LSTMCell) with 100 units, followed by a softmax output layer.
  • Batch size of 1.
  • MomentumOptimizer with a learning rate of 0.005 and a momentum of 0.9.

The validation set is obtained by randomly truncating the beginning of each audio file (by up to 125 ms), making sure we never cut into actual speech. Using 5 unseen sentences would be more realistic; however, the network has almost no chance of generalizing to them, since a training set of only 5 sentences is far too small to cover all the phonemes of the English language. By randomly truncating the leading silences, we make sure the network does not simply memorize a one-to-one mapping from audio to text.
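The truncation step can be sketched as follows. This is a minimal pure-Python illustration, not the repository's actual code; the helper name, the 16 kHz sample rate, and the `speech_start_index` argument are assumptions:

```python
import random

SAMPLE_RATE = 16000      # assumed sample rate in Hz
MAX_TRUNCATE_MS = 125    # truncate between 0 and 125 ms of leading silence

def truncate_leading_silence(samples, speech_start_index):
    """Randomly drop up to MAX_TRUNCATE_MS of leading silence.

    `speech_start_index` marks the first sample where the speaker
    actually talks, so the cut never removes speech.
    """
    max_cut = int(SAMPLE_RATE * MAX_TRUNCATE_MS / 1000)
    cut = random.randint(0, min(max_cut, speech_start_index))
    return samples[cut:]

# Toy example: 2000 samples of silence followed by "speech".
audio = [0.0] * 2000 + [0.5] * 4000
shorter = truncate_leading_silence(audio, speech_start_index=2000)
```

Each call produces a slightly different version of the same utterance, so the validation inputs never align frame-for-frame with what the network saw during training.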

Results

Most of the time, the network guesses the correct sentence. Sometimes it misses a few characters, but the results are still encouraging.

Example 1

Original training: diving is no part of football
Decoded training: diving is no part of football
Original validation: theres still a bit to go
Decoded validation: thers still a bl to go
Epoch 3074/10000, train_cost = 0.032, train_ler = 0.000, val_cost = 9.131, val_ler = 0.125, time = 1.648

Example 2

Original training: three hours later the man was free
Decoded training: three hours later the man was free
Original val: and they were being paid 
Decoded val: nand they ere being paid  
Epoch 3104/10000, train_cost = 0.075, train_ler = 0.000, val_cost = 2.945, val_ler = 0.077, time = 1.042

Example 3

Original training: theres still a bit to go
Decoded training: theres still a bit to go
Original val: three hours later the man was free
Decoded val: three hors late th man wasfree
Epoch 3108/10000, train_cost = 0.032, train_ler = 0.000, val_cost = 12.532, val_ler = 0.118, time = 0.859

CTC Loss

(Plot: CTC loss over training, shown on a log scale.)

The CTC loss is the raw loss defined in the original paper by Alex Graves.
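The "Decoded" transcriptions above come from collapsing the network's per-frame predictions. A minimal greedy (best-path) CTC decoder follows the usual convention: merge consecutive repeats, then drop blanks. This is an illustrative sketch; the toy alphabet and the choice of 0 as the blank index are assumptions, not the repository's code:

```python
BLANK = 0  # index of the CTC blank label (an assumption for this sketch)

def ctc_greedy_decode(frame_label_ids):
    """Collapse per-frame argmax labels: merge repeats, then drop blanks."""
    decoded = []
    previous = None
    for label in frame_label_ids:
        if label != previous and label != BLANK:
            decoded.append(label)
        previous = label
    return decoded

# Toy alphabet: 0 = blank, 1 = 'h', 2 = 'e', 3 = 'l', 4 = 'o'
frames = [1, 1, 0, 2, 2, 3, 0, 3, 4, 0]
print(ctc_greedy_decode(frames))  # [1, 2, 3, 3, 4] -> "hello"
```

Note how the blank between the two 3s lets the decoder emit a doubled letter ("ll"), which is exactly why CTC needs the blank symbol.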

LER Loss

LER (Label Error Rate) measures the inaccuracy between the predicted and the ground truth texts.
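LER is essentially a normalized edit distance. The sketch below assumes character-level labels and normalization by the length of the ground truth; applied to the transcriptions from Example 1, it reproduces the reported val_ler of 0.125:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (dynamic programming)."""
    m, n = len(ref), len(hyp)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

def label_error_rate(ref, hyp):
    """Edit distance normalized by the length of the ground truth."""
    return edit_distance(ref, hyp) / len(ref)

print(label_error_rate("theres still a bit to go",
                       "thers still a bl to go"))  # 0.125
```

A LER of 0 means the decoded text matches the ground truth exactly, which is what the training rows (train_ler = 0.000) show.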

Clearly, the network learns very well on just 5 sentences! It is far from perfect, but quite promising for a first try.

Special Thanks

License: Apache License 2.0
