Amos_CCH (AmosCch)

Location: Shanghai

Amos_CCH's repositories

percepnet

PercepNet implemented using Keras; it still needs to be optimized and tuned. A minimal model sketch follows this entry.

Language: C · License: BSD-3-Clause · Stargazers: 1 · Issues: 0
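As a rough illustration of what a Keras-based, PercepNet-style model can look like, the sketch below maps per-frame spectral features to per-band gains with a small GRU network. The feature dimension, layer widths, and band count are placeholder assumptions, not values taken from this repository.

```python
# Illustrative sketch of a GRU-based band-gain estimator in Keras.
# All dimensions below are placeholders, not this repository's settings.
import tensorflow as tf

NUM_FEATURES = 70   # assumed per-frame feature dimension
NUM_BANDS = 34      # assumed number of spectral bands to predict gains for

model = tf.keras.Sequential([
    tf.keras.layers.GRU(128, return_sequences=True,
                        input_shape=(None, NUM_FEATURES)),   # (time, features)
    tf.keras.layers.GRU(128, return_sequences=True),
    tf.keras.layers.Dense(NUM_BANDS, activation="sigmoid"),  # per-band gains in [0, 1]
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```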

BeamformIt

BeamformIt acoustic beamforming software

Language: C++ · Stargazers: 0 · Issues: 0
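BeamformIt is based on weighted delay-and-sum beamforming. The NumPy sketch below illustrates only that basic idea; it is not this repository's code, and the per-channel delays and weights are assumed to be known inputs.

```python
# Illustrative delay-and-sum beamformer in NumPy (not BeamformIt's actual code).
import numpy as np

def delay_and_sum(channels, delays_samples, weights=None):
    """channels: (num_mics, num_samples); delays_samples: integer delay per mic."""
    num_mics, num_samples = channels.shape
    weights = np.ones(num_mics) / num_mics if weights is None else weights
    out = np.zeros(num_samples)
    for ch, delay, w in zip(channels, delays_samples, weights):
        # Shift each channel so the signals line up, then accumulate.
        # np.roll wraps around at the edges, which is acceptable for a sketch.
        out += w * np.roll(ch, -int(delay))
    return out

# Example: two copies of the same signal, the second delayed by 5 samples.
sig = np.random.randn(16000)
mics = np.stack([sig, np.roll(sig, 5)])
enhanced = delay_and_sum(mics, delays_samples=[0, 5])
```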

cheatsheets

Official Matplotlib cheat sheets

Language: Python · License: BSD-2-Clause · Stargazers: 0 · Issues: 0
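For reference, the kind of basic object-oriented Matplotlib usage those cheat sheets summarize looks like this:

```python
# Minimal Matplotlib example using the figure/axes API.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)
fig, ax = plt.subplots(figsize=(4, 3))
ax.plot(x, np.sin(x), label="sin(x)")
ax.set_xlabel("x")
ax.set_ylabel("amplitude")
ax.legend()
fig.tight_layout()
fig.savefig("sine.png", dpi=150)
```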

Dive-into-DL-PyTorch

This project converts the original MXNet implementations from the book Dive into Deep Learning into PyTorch implementations. A minimal PyTorch training-loop sketch follows this entry.

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 0 · Issues: 0
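A minimal PyTorch training loop, in the spirit of the book's early linear-regression chapter, looks roughly like this; the synthetic data and tiny model are toy placeholders rather than code from the repository.

```python
# Toy PyTorch training loop: linear regression on synthetic data.
import torch
from torch import nn

# Synthetic data: y = 2*x1 - 3.4*x2 + 4.2 + noise (coefficients are arbitrary).
true_w, true_b = torch.tensor([2.0, -3.4]), 4.2
X = torch.randn(1000, 2)
y = X @ true_w + true_b + 0.01 * torch.randn(1000)

model = nn.Linear(2, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.03)

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X).squeeze(-1), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch + 1}, loss {loss.item():.4f}")
```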

DNS-Challenge

This repo contains the scripts, models, and required files for the ICASSP 2021 Deep Noise Suppression (DNS) Challenge.

License: CC-BY-4.0 · Stargazers: 0 · Issues: 0

examples

A set of examples around PyTorch in vision, text, reinforcement learning, etc.

Language: Python · License: BSD-3-Clause · Stargazers: 0 · Issues: 0

pb_chime5

Speech enhancement system for the CHiME-5 dinner party scenario

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

pyroomacoustics

Pyroomacoustics is a package for audio signal processing for indoor applications. It was developed as a fast prototyping platform for beamforming algorithms in indoor scenarios.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0
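A small sketch of what prototyping with pyroomacoustics looks like: build a shoebox room, add a source and a microphone array, and simulate. The room dimensions, positions, and random source signal are arbitrary illustrative values.

```python
# Simulate a simple shoebox room with pyroomacoustics (illustrative values only).
import numpy as np
import pyroomacoustics as pra

fs = 16000
room = pra.ShoeBox([6, 4, 3], fs=fs, max_order=10)             # 6 m x 4 m x 3 m room
room.add_source([2.0, 1.5, 1.7], signal=np.random.randn(fs))   # 1 s of noise as the source
mic_positions = np.c_[[3.0, 2.0, 1.2], [3.1, 2.0, 1.2]]        # two-microphone array, shape (3, 2)
room.add_microphone_array(pra.MicrophoneArray(mic_positions, fs))
room.simulate()
print(room.mic_array.signals.shape)  # (num_mics, num_samples)
```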

pytorch-tutorial

PyTorch Tutorial for Deep Learning Researchers

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

rnnoise

Recurrent neural network for audio noise reduction

Language: C · License: BSD-3-Clause · Stargazers: 0 · Issues: 0
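rnnoise exposes a small C API (rnnoise_create, rnnoise_process_frame, rnnoise_destroy) that processes 480-sample float frames at 48 kHz. The ctypes sketch below shows one way it could be driven from Python; the shared-library path and the dummy input are assumptions, not part of this repository.

```python
# Call librnnoise from Python via ctypes (the library path is an assumption).
import ctypes
import numpy as np

lib = ctypes.CDLL("librnnoise.so")  # adjust to the actual build output
lib.rnnoise_create.restype = ctypes.c_void_p
lib.rnnoise_create.argtypes = [ctypes.c_void_p]
lib.rnnoise_process_frame.restype = ctypes.c_float
lib.rnnoise_process_frame.argtypes = [
    ctypes.c_void_p,
    ctypes.POINTER(ctypes.c_float),
    ctypes.POINTER(ctypes.c_float),
]
lib.rnnoise_destroy.argtypes = [ctypes.c_void_p]

FRAME_SIZE = 480  # 10 ms at 48 kHz
state = lib.rnnoise_create(None)
frame = (np.random.randn(FRAME_SIZE) * 1000).astype(np.float32)  # dummy samples in 16-bit range
denoised = np.zeros(FRAME_SIZE, dtype=np.float32)
vad_prob = lib.rnnoise_process_frame(
    state,
    denoised.ctypes.data_as(ctypes.POINTER(ctypes.c_float)),
    frame.ctypes.data_as(ctypes.POINTER(ctypes.c_float)),
)
lib.rnnoise_destroy(state)
print("VAD probability:", vad_prob)
```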

svoice

We provide a PyTorch implementation of the paper "Voice Separation with an Unknown Number of Multiple Speakers," which presents a new method for separating a mixed audio sequence in which multiple voices speak simultaneously. The method employs gated neural networks trained to separate the voices over multiple processing steps while keeping the speaker assigned to each output channel fixed. A different model is trained for every possible number of speakers, and the model with the largest number of speakers is used to estimate the actual number of speakers in a given sample. The method greatly outperforms the previous state of the art, which, as the paper shows, is not competitive for more than two speakers. A hedged sketch of this selection procedure follows this entry.

License: NOASSERTION · Stargazers: 0 · Issues: 0
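As a hedged sketch of the speaker-count selection idea described above, and not the repository's actual code or API, one could run the model trained for the largest number of speakers, count the output channels carrying non-negligible energy, and then re-separate with the model trained for that count. The function name, the `models` mapping, and the energy threshold below are all hypothetical.

```python
# Hypothetical sketch of handling an unknown number of speakers (names are assumptions).
# `models` maps a speaker count k to a separation model producing (batch, k, samples).
import torch

def separate_unknown_speakers(mixture, models, energy_threshold=1e-3):
    """mixture: (batch, samples); models: dict {num_speakers: nn.Module}."""
    max_k = max(models)
    with torch.no_grad():
        # Run the model trained for the largest number of speakers.
        candidates = models[max_k](mixture)          # (batch, max_k, samples)
        # Count output channels carrying non-negligible energy.
        energies = candidates.pow(2).mean(dim=-1)    # (batch, max_k)
        est_k = int((energies > energy_threshold).sum(dim=1).max().item())
        est_k = max(est_k, 1)
        # Re-separate with the model trained for the estimated speaker count.
        return models[min(est_k, max_k)](mixture)
```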

webrtc

WebRTC sub-repo dependency for WebRTC SDK

License: BSD-3-Clause · Stargazers: 0 · Issues: 0

WebRTC-3A1V

AEC (acoustic echo cancellation), AGC (automatic gain control), ANS (noise suppression), VAD (voice activity detection), and CNG (comfort noise generation) from WebRTC

Stargazers: 0 · Issues: 0