tingweichen (ctwgL)

Company: Shanghai University

Location: Wuhan, China

tingweichen's repositories

webrtc_agc2

Demo for WebRTC AGC2

Language: Makefile · License: Apache-2.0 · Stargazers: 28 · Issues: 3 · Issues: 2

webrtc-beamforming

The WebRTC beamforming module, extracted and organized as a standalone package

AI-Expert-Roadmap

Roadmap to becoming an Artificial Intelligence Expert in 2021

Language: JavaScript · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

annotated_deep_learning_paper_implementations

🧑‍🏫 50! Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, ...), gans (cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠

Language: Jupyter Notebook · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

ant-design

An enterprise-class UI design language and React UI library

Language: TypeScript · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

awesome-audio-visual

A curated list of different papers and datasets in various areas of audio-visual processing

Stargazers: 0 · Issues: 1 · Issues: 0

Awesome-Speech-Enhancement

A tutorial for Speech Enhancement researchers and practitioners. The purpose of this repo is to organize the world’s resources for speech enhancement and make them universally accessible and useful.

License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

awesome-speech-enhancement-1

Speech enhancement / speech separation / sound source localization

Stargazers: 0 · Issues: 1 · Issues: 0

awesome-speech-recognition-speech-synthesis-papers

Speech synthesis, voice conversion, self-supervised learning, music generation, automatic speech recognition, speaker verification, and language modeling

License: MIT · Stargazers: 0 · Issues: 1 · Issues: 0

d2l-mindspore

A MindSpore implementation of "Dive into Deep Learning" (《动手学深度学习》), for MindSpore learners to use alongside Mu Li's course.

Language: Jupyter Notebook · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

EmoSphere-TTS

The official implementation of EmoSphere-TTS

Stargazers: 0 · Issues: 0 · Issues: 0

espnet

End-to-End Speech Processing Toolkit

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0

evalml

EvalML is an AutoML library written in Python.

Language: Python · License: BSD-3-Clause · Stargazers: 0 · Issues: 0 · Issues: 0

FAcodec

Training code for FAcodec, as presented in NaturalSpeech 3

Stargazers: 0 · Issues: 0 · Issues: 0

FNet-pytorch

Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms

Language: Python · License: MIT · Stargazers: 0 · Issues: 1 · Issues: 0

free-programming-books-zh_CN

:books: Free Chinese-language books on computer programming; contributions welcome

License: GPL-3.0 · Stargazers: 0 · Issues: 1 · Issues: 0

FullSubNet

PyTorch implementation of "FullSubNet: A Full-Band and Sub-Band Fusion Model for Real-Time Single-Channel Speech Enhancement."

Language: Python · License: MIT · Stargazers: 0 · Issues: 1 · Issues: 0

generative-ai-for-beginners

12 Lessons, Get Started Building with Generative AI 🔗 https://microsoft.github.io/generative-ai-for-beginners/

License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

ISCLP-KF

Integrated sidelobe cancellation and linear prediction Kalman filter for joint multi-microphone speech dereverberation, interfering speech cancellation, and noise reduction.

Language: MATLAB · License: GPL-3.0 · Stargazers: 0 · Issues: 1 · Issues: 0

NeMo

NeMo: a toolkit for conversational AI

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

pytorch-template

PyTorch deep learning projects made easy.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

setk

Tools for Speech Enhancement integrated with Kaldi

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0

SpeechAlgorithms

Speech Algorithms Collections

Language: MATLAB · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0

speechbrain

A PyTorch-based Speech Toolkit

License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

svoice

We provide a PyTorch implementation of the paper "Voice Separation with an Unknown Number of Multiple Speakers", which presents a method for separating a mixed audio sequence in which multiple voices speak simultaneously. The method employs gated neural networks trained to separate the voices over multiple processing steps while keeping the speaker assigned to each output channel fixed. A separate model is trained for each possible number of speakers, and the model trained for the largest number of speakers is used to estimate the actual number of speakers in a given sample (a minimal sketch of this selection idea follows this entry). The method greatly outperforms the prior state of the art, which, as the paper shows, is not competitive for more than two speakers.

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 1 · Issues: 0
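
The svoice description above explains the selection mechanism: one separator is trained per possible speaker count, and the model trained for the largest count is used to estimate how many speakers are actually present. Below is a minimal, hedged sketch of that idea in Python; the models dictionary, the count_active_speakers helper, and the energy threshold are illustrative assumptions, not the actual svoice interface.

# Hedged sketch of the per-speaker-count selection idea (not the svoice code).
# Assumptions: `models` maps a speaker count to a separator trained for that count,
# and a channel counts as "active" if its mean energy exceeds an illustrative threshold.
import torch

def count_active_speakers(separated: torch.Tensor, energy_threshold: float = 1e-3) -> int:
    # separated: (num_channels, num_samples) output of the largest-count model
    energies = separated.pow(2).mean(dim=-1)
    return int((energies > energy_threshold).sum().item())

def separate(mixture: torch.Tensor, models: dict) -> torch.Tensor:
    # Run the model trained for the most speakers, count the output channels
    # that actually carry energy, then run the model matching that estimate.
    with torch.no_grad():
        candidates = models[max(models)](mixture)
        n_est = count_active_speakers(candidates)
        n_est = min(max(n_est, min(models)), max(models))  # clamp to available models
        return models[n_est](mixture)

Each entry in models would be a trained separation network; the energy-threshold count is only a stand-in for whatever activity criterion the actual method uses.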

tinyrecurrentunet

Real-Time De-noising and De-reverberation with Tiny Recurrent UNet

Stargazers: 0 · Issues: 0 · Issues: 0

unified2021

A Unified Speech Enhancement Front-End for Online Dereverberation, Acoustic Echo Cancellation, and Source Separation

Stargazers: 0 · Issues: 0 · Issues: 0