Steven Jack's repositories

Sperker_recognition_629

Software engineering coursework.

Language: Python · Stargazers: 3 · Issues: 1

zhoupeiyuan_Mechanics_competition

Shared materials for the Zhou Peiyuan Mechanics Competition.

pandas120

120 Pandas data-processing exercises; the original data comes from the Tianchi big-data research platform.

Language: Jupyter Notebook · Stargazers: 2 · Issues: 1

ChineseChess-AlphaZero

Implement AlphaZero/AlphaGo Zero methods on Chinese chess.

Language: Python · License: GPL-3.0 · Stargazers: 1 · Issues: 0

Database-homework

Uncle Mars's Happy Holiday.

Language: HTML · Stargazers: 1 · Issues: 1

hello-world

Just another repository

keras

Deep Learning for humans

Language: Python · License: NOASSERTION · Stargazers: 1 · Issues: 0

pacman_homework

AI course homework.

Language: Python · Stargazers: 1 · Issues: 1

-Tianchi_winter_charging2021

Notes from my Tianchi AI study over the winter break.

Stargazers: 0 · Issues: 1

996.ICU

Repo for counting stars and contributing. Press F to pay respect to glorious developers.

Language: Rust · License: NOASSERTION · Stargazers: 0 · Issues: 0

caffe

Caffe: a fast open framework for deep learning.

Language: C++ · License: NOASSERTION · Stargazers: 0 · Issues: 0

dockerbook-code

The code and configuration examples from The Docker Book (http://www.dockerbook.com)

Language: Ruby · Stargazers: 0 · Issues: 0

edX-CS188.1x-Artificial-Intelligence

Projects from the edX (BerkeleyX) course CS188.1x Artificial Intelligence.

Stargazers: 0 · Issues: 0

Huawei-Challenge-Speaker-Identification

Trained speaker-embedding deep learning models and evaluation pipelines in PyTorch and TensorFlow for speaker recognition. (A minimal embedding-comparison sketch follows this entry.)

Language: Jupyter Notebook · Stargazers: 0 · Issues: 1
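
A hedged illustration of how trained speaker embeddings are typically scored at evaluation time; the function names and the cosine-similarity threshold here are assumptions for illustration, and the repository's actual PyTorch/TensorFlow pipeline may differ.

```python
# Hypothetical scoring step for speaker verification with trained embeddings.
import numpy as np

def cosine_score(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between two speaker-embedding vectors."""
    return float(np.dot(emb_a, emb_b) /
                 (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

def same_speaker(emb_a: np.ndarray, emb_b: np.ndarray,
                 threshold: float = 0.7) -> bool:
    """Accept a trial as 'same speaker' when the score clears a tuned threshold."""
    return cosine_score(emb_a, emb_b) >= threshold
```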

learn-python3

Learn Python 3 Sample Code

Language: Python · License: GPL-2.0 · Stargazers: 0 · Issues: 0

Speech_Signal_Processing_and_Classification

Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step for any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification: developing two-class classifiers that can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject. The mathematical modeling of the human speech production system suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients are guaranteed to discriminate between the system (e.g., vocal tract) contribution and that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) can also be derived. These traditional features will be tested against agnostic features extracted by convolutional neural networks (CNNs) (e.g., auto-encoders) [4]. The pattern recognition step will be based on Gaussian Mixture Model classifiers, k-nearest neighbor classifiers, Bayes classifiers, and Deep Neural Networks. The Massachusetts Eye and Ear Infirmary Dataset (MEEI Dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources, such as KALDI, will be used toward achieving our goal. Comparisons will be made against [6-8]. (A minimal MFCC front-end sketch follows this entry.)

Language: Python · License: MIT · Stargazers: 0 · Issues: 0
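
A minimal sketch of the MFCC front-end described above, assuming librosa is available; the project description mentions KALDI, which the real pipeline may use instead, and the file name and frame parameters below are illustrative.

```python
# Minimal MFCC front-end sketch: 25 ms frames, 10 ms shift (illustrative values).
import librosa

def extract_mfcc(wav_path, n_mfcc=13, frame_ms=25, hop_ms=10):
    """Return an (n_frames, n_mfcc) matrix of MFCCs for one utterance."""
    y, sr = librosa.load(wav_path, sr=None)      # keep the native sample rate
    n_fft = int(sr * frame_ms / 1000)            # short-term frame length
    hop_length = int(sr * hop_ms / 1000)         # frame shift
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=n_fft, hop_length=hop_length)
    return mfcc.T                                # frames along the first axis

# Example: average frame-level MFCCs into one utterance-level vector that a
# GMM or k-NN classifier could consume.
# utterance_vector = extract_mfcc("utterance.wav").mean(axis=0)
```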

Start_Leetcode_caishunzhe

Trying to solve three problems a day; not aiming for volume.

Language: C++ · Stargazers: 0 · Issues: 1

wechat-public-account-push

WeChat Official Account push notifications: a little romance for my girlfriend.

Language: JavaScript · License: MIT · Stargazers: 0 · Issues: 0