Daniel Meng Liu's repositories
Mel-Relative-Phase
relative phase feature for speech processing
asv-subtools
Open-source tools for speaker recognition
faceswap_pytorch
Deepfake model ready to train on any paired dataset at higher resolution
Anti-spoofing-papers
A collection of anti-spoofing papers
asvspoof2017
Scripts for the ASVspoof 2017 challenge
AV-sync
Python implementation of the paper "Dynamic Temporal Alignment of Speech to Lips"
DanielMengLiu.github.io
Meng Liu's Academic Personal Homepage
g2p-seq2seq
G2P with TensorFlow
speaker-embedding-with-phonetic-information
The code for the Interspeech paper "Speaker Embedding Extraction with Phonetic Information"
tf-kaldi-speaker
Neural speaker recognition/verification system based on Kaldi and TensorFlow
cmudict
CMU US English Dictionary
espnet
End-to-End Speech Processing Toolkit
GMM-UBM_MAP_SV
Python code for training and testing GMM-UBM and maximum a posteriori (MAP) adaptation-based speaker verification
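The MAP adaptation this repository refers to is the standard relevance-MAP update used in GMM-UBM speaker verification. A minimal single-Gaussian, 1-D sketch of the mean update (illustrative only; the names below are not taken from this repository's code):

```python
# Minimal sketch of relevance-MAP mean adaptation for one 1-D Gaussian,
# the per-mixture update at the heart of GMM-UBM speaker verification.
# All identifiers here are illustrative, not from this repository.

def map_adapt_mean(ubm_mean, frames, relevance=16.0):
    """Shift the UBM mean toward the speaker's data, weighted by the
    amount of data (n) versus the relevance factor r:

        m_new = (n * x_bar + r * m_ubm) / (n + r)
    """
    n = len(frames)
    if n == 0:
        return ubm_mean  # no enrollment data: keep the prior (UBM) mean
    x_bar = sum(frames) / n
    return (n * x_bar + relevance * ubm_mean) / (n + relevance)

# With little data the adapted mean stays near the UBM mean;
# with more data it moves toward the speaker's sample mean.
print(map_adapt_mean(0.0, [1.0] * 4))    # 4/(4+16)  -> 0.2
print(map_adapt_mean(0.0, [1.0] * 160))  # 160/176   -> ~0.909
```

The relevance factor (commonly around 16) controls how much enrollment data is needed before the model trusts the speaker statistics over the universal background model.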
hmmlearn
Hidden Markov Models in Python, with scikit-learn like API
LipNet-PyTorch
A state-of-the-art PyTorch implementation of the method described in the paper "LipNet: End-to-End Sentence-level Lipreading" (https://arxiv.org/abs/1611.01599)
Lipreading-DenseNet3D
DenseNet3D Model In "LRW-1000: A Naturally-Distributed Large-Scale Benchmark for Lip Reading in the Wild", https://arxiv.org/abs/1810.06990
Lipreading-ResNet
Torch code for using Residual Networks with LSTMs for Lipreading
Lipreading_using_Temporal_Convolutional_Networks
ICASSP'20 Lipreading using Temporal Convolutional Networks
MIM-lipreading
Code and model for the paper "Mutual Information Maximization for Effective Lip Reading"
numpy_exercises
NumPy exercises.
python-vad
py-webrtcvad wrapper for trimming speech clips
pytorch-ivectors
GPU accelerated implementation of i-vector extractor training using PyTorch. Requires Kaldi for feature extraction and UBM training. An example script is provided for VoxCeleb data.
PyTorch_Speaker_Verification
PyTorch implementation of "Generalized End-to-End Loss for Speaker Verification" by Wan, Li et al.