blues-green

User data from GitHub: https://github.com/blues-green

GitHub: @blues-green

blues-green's repositories

BTC-ISMIR19

"A Bi-Directional Transformer for Musical Chord Recognition", accepted at ISMIR 2019

Language: Python · License: MIT · Stargazers: 1 · Issues: 1

Audio-auto-tagging

Convolutional Neural Network for auto-tagging of audio clips on MagnaTagATune dataset

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

Auralisation

Auralisation of learned features in CNN (for audio)

Language: Python · Stargazers: 0 · Issues: 1

ChangeVoice

Voice-changing processing for voice messages, implemented with the NDK

Language: C · Stargazers: 0 · Issues: 1

deep-learning-HAR

Convolutional and LSTM networks to classify human activity

Language: Jupyter Notebook · Stargazers: 0 · Issues: 1

hosts

:statue_of_liberty: The latest working Google hosts file. Mirror in China:

Language: Rascal · License: MIT · Stargazers: 0 · Issues: 1

hosts-1

Mirror: https://coding.net/u/scaffrey/p/hosts/git

Language: Rascal · License: NOASSERTION · Stargazers: 0 · Issues: 1

magenta

Magenta: Music and Art Generation with Machine Intelligence

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 1

MidiNet

This repository contains the source code of MidiNet: A Convolutional Generative Adversarial Network for Symbolic-domain Music Generation

Language: Python · Stargazers: 0 · Issues: 1

musegan

An AI for Music Generation

Language: Python · License: MIT · Stargazers: 0 · Issues: 1

musicautobot

Using deep learning to generate music in MIDI format.

Language: Jupyter Notebook · License: MIT · Stargazers: 0 · Issues: 1

ontts

An iFlytek online speech-synthesis backend service for Linux

Language: Go · Stargazers: 0 · Issues: 1

panotti

A multi-channel neural network audio classifier using Keras

Language: Python · License: MIT · Stargazers: 0 · Issues: 1

pretty-midi

Utility functions for handling MIDI data in a nice/intuitive way.

Language: Jupyter Notebook · License: MIT · Stargazers: 0 · Issues: 0
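To illustrate the kind of "nice/intuitive" MIDI utility this library provides, here is a minimal self-contained sketch of one such conversion, note number to frequency, assuming twelve-tone equal temperament with A4 = 440 Hz (the function name mirrors the library's convention but this is a standalone re-implementation, not the library's code):

```python
def note_number_to_hz(note_number):
    """Convert a MIDI note number to a frequency in Hz.

    Assumes equal temperament tuned to A4 = 440 Hz, where
    MIDI note 69 is A4 and each semitone is a factor of 2**(1/12).
    """
    return 440.0 * 2.0 ** ((note_number - 69) / 12.0)


print(note_number_to_hz(69))            # A4 -> 440.0
print(round(note_number_to_hz(60), 2))  # middle C -> 261.63
```

The same exponential relationship underlies pitch handling throughout MIDI tooling; a library like this bundles many such helpers (timing, key signatures, piano rolls) behind one object-oriented API.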

pyenv-win

pyenv for Windows. pyenv is a simple Python version management tool. It lets you easily switch between multiple versions of Python. It's simple, unobtrusive, and follows the UNIX tradition of single-purpose tools that do one thing well.

Language: VBScript · License: MIT · Stargazers: 0 · Issues: 1

python-Speech_Recognition

A simple example of using the Baidu speech recognition API with Python.

Language: Python · Stargazers: 0 · Issues: 1

pyvad

VAD (Voice Activity Detector): a Python implementation of endpoint detection on streaming data read in real time

Language: Python · Stargazers: 0 · Issues: 1
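The repository's own algorithm isn't shown in this listing, but endpoint detection can be sketched in its simplest form as frame-by-frame energy thresholding; the frame length and threshold below are illustrative assumptions, not values from the repo:

```python
import math


def frame_energies(samples, frame_len=160):
    """Split a sample stream into consecutive frames and return mean energy per frame."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    return [sum(s * s for s in f) / len(f) for f in frames]


def detect_endpoints(samples, frame_len=160, threshold=0.01):
    """Return per-frame voice-activity flags: True where frame energy exceeds the threshold."""
    return [e > threshold for e in frame_energies(samples, frame_len)]


# Synthetic stream: 320 samples of silence followed by 320 samples of a 440 Hz tone at 8 kHz.
silence = [0.0] * 320
tone = [0.5 * math.sin(2 * math.pi * 440 * t / 8000) for t in range(320)]
print(detect_endpoints(silence + tone, frame_len=160))  # [False, False, True, True]
```

A real-time detector would apply the same per-frame decision to audio chunks as they arrive, typically with hangover smoothing so brief pauses inside speech are not cut.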

remi

"Pop Music Transformer: Generating Music with Rhythm and Harmony", arXiv 2020

Language: Python · License: GPL-3.0 · Stargazers: 0 · Issues: 1

Speech_Signal_Processing_and_Classification

Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step toward any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification: developing two-class classifiers that can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject.

The mathematical modeling of the speech production system in humans suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients are guaranteed to discriminate between the system (e.g., vocal tract) contribution and that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) can also be derived. These traditional features will be tested against agnostic features extracted by convolutional neural networks (CNNs) (e.g., auto-encoders) [4].

The pattern recognition step will be based on Gaussian Mixture Model classifiers, K-nearest neighbor classifiers, Bayes classifiers, as well as Deep Neural Networks. The Massachusetts Eye and Ear Infirmary Dataset (MEEI-Dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources, such as KALDI, will be used toward achieving our goal. Comparisons will be made against [6-8].

Language: Python · Stargazers: 0 · Issues: 1
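The short-term framing step described above, which precedes any LPC or MFCC computation, can be sketched as follows; the frame length and hop size here are illustrative, not the repository's actual settings:

```python
def frame_signal(signal, frame_len, hop):
    """Split a 1-D signal into overlapping short-term frames.

    Each frame starts `hop` samples after the previous one, so
    consecutive frames overlap by `frame_len - hop` samples.
    """
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]


sig = list(range(10))
print(frame_signal(sig, frame_len=4, hop=2))
# [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

In practice each frame would then be windowed (e.g., with a Hamming window) before spectral features such as LPCs or MFCCs are computed per frame.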

spider163

Scrapes popular comments from NetEase Cloud Music

Language: Python · License: MIT · Stargazers: 0 · Issues: 1

transformers

🤗 Transformers: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 1

VGG16CAM-keras

Keras implementation of the VGG16-CAM model

Language: Python · Stargazers: 0 · Issues: 0

XunFeiDemo

A simple demo of the iFlytek SDK, showing how to use its speech recognition and speech synthesis features.

Language: Java · Stargazers: 0 · Issues: 1