There are 31 repositories under the kernel-pca topic.
Unsupervised and semi-supervised anomaly detection: Isolation Forest, Kernel PCA detection, ADOA, etc.
Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step toward any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification, that is, in developing two-class classifiers which can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject. The mathematical modeling of the speech production system in humans suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients are guaranteed to discriminate between the system (e.g., vocal tract) contribution and that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) could also be derived. The aforementioned, so to speak traditional, features will be tested against agnostic features extracted by convolutional neural networks (CNNs) (e.g., auto-encoders) [4]. The pattern recognition step will be based on Gaussian Mixture Model based classifiers, K-nearest neighbor classifiers, Bayes classifiers, as well as Deep Neural Networks. The Massachusetts Eye and Ear Infirmary Dataset (MEEI-Dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources will be used toward achieving our goal, such as KALDI. Comparisons will be made against [6-8].
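The MFCC front end described above (framing, windowing, power spectrum, mel filterbank, log, DCT) can be sketched in plain NumPy/SciPy. This is a minimal illustration, not the proposal's library; the frame length, hop size, FFT size, and filterbank parameters below are common illustrative defaults, not values taken from the source.

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512,
         n_mels=26, n_ceps=13):
    """Minimal MFCC front end: frame, window, power spectrum,
    mel filterbank, log, DCT."""
    # Slice the signal into overlapping frames and apply a Hamming window
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)

    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

    # Triangular mel filterbank spanning 0 .. sr/2
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        lo, c, hi = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, lo:c] = (np.arange(lo, c) - lo) / max(c - lo, 1)
        fbank[i, c:hi] = (hi - np.arange(c, hi)) / max(hi - c, 1)

    # Log filterbank energies, then DCT to decorrelate -> cepstral coeffs
    energies = np.log(np.maximum(power @ fbank.T, 1e-10))
    return dct(energies, type=2, axis=1, norm='ortho')[:, :n_ceps]
```

For a one-second utterance at 16 kHz these defaults yield 98 frames of 13 coefficients each; real front ends (e.g., KALDI's) add pre-emphasis, liftering, and delta features on top of this skeleton.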
In this repository I implemented machine learning methods ranging from simple to complex, aiming for template-style code.
Application of Deep Learning and Feature Extraction in Software Defect Prediction
Here I've demonstrated how and why we should use PCA, Kernel PCA, LDA, and t-SNE for dimensionality reduction when working with high-dimensional datasets.
The code for Principal Component Analysis (PCA), dual PCA, Kernel PCA, Supervised PCA (SPCA), dual SPCA, and Kernel SPCA
Re-Implementation of GPLVM algorithm & performance assessment against Kernel-PCA
Source Code & Datasets for "Vertical Federated Principal Component Analysis and Its Kernel Extension on Feature-wise Distributed Data"
Implementation of Bayesian PCA (Bishop, 1999) and Bayesian Kernel PCA
Application of principal component analysis capturing non-linearity in the data using kernel approach
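As a sketch of the kernel approach these repositories describe, here is a minimal RBF kernel PCA in NumPy: build the Gaussian kernel matrix, center it in feature space, and project onto its leading eigenvectors. The `gamma` value and the eigenvalue floor are illustrative assumptions, not anyone's reference implementation.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """RBF kernel PCA: project data onto the top eigenvectors of the
    centered kernel matrix."""
    # Pairwise squared Euclidean distances -> RBF (Gaussian) kernel
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

    # Center the kernel matrix in feature space: Kc = K - 1K - K1 + 1K1
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one

    # Eigendecompose; eigh returns ascending order, so take the largest
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))

    # Projections of the training points onto the principal components
    return Kc @ alphas
```

On data such as two concentric circles, which linear PCA cannot separate, the first kernel principal component with a suitable `gamma` distinguishes the rings; scikit-learn's `KernelPCA` offers the same idea with out-of-sample projection.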
Performed different tasks such as data preprocessing, cleaning, classification, and feature extraction/reduction on the wine dataset.
The code for Image Structural Component Analysis (ISCA) and Kernel ISCA
Python package for plug-and-play dimensionality reduction techniques and data visualization in 2D or 3D.
5th semester project concerning feature engineering and nonlinear dimensionality reduction in particular.
My Machine Learning course projects
Data Science Portfolio
Unsupervised machine learning algorithms. Classical and kernel methods for non-linearly separable data.
Notes, homework and project for PSU's STAT 672 Winter 2020
Machine learning algorithms done from scratch in Python with Numpy/Scipy
Continuation of my machine learning work, organized by subject, starting with evaluating classification model performance.
Repository for the code of the "Introduction to Machine Learning" (IML) lecture at the "Learning & Adaptive Systems Group" at ETH Zurich.
K-means, Spectral clustering, PCA, and Kernel PCA
Unsupervised ML dimensionality reduction and clustering models for predicting whether a banknote is genuine, based on an OpenML dataset of wavelet-analysis results for genuine and forged banknotes. A practical exercise in Python 3.
Winning entry in one of the DACON competitions.
This repository explores the interplay between dimensionality reduction techniques and classification algorithms in the realm of breast cancer diagnosis. Leveraging the Breast Cancer Wisconsin dataset, it assesses the impact of various methods, including PCA, Kernel PCA, LLE, UMAP, and Supervised UMAP, on the performance of a Decision Tree.
Machine Learning assignments from coursework.
Analyzing and overcoming the curse of dimensionality and exploring various gradient descent techniques with implementations in R
Houses a series of projects I worked on for a course in Data Mining that I took in my Ph.D. Data Science program at UTEP in the Fall of 2022. Covers areas such as Regularized Logistic Regression, Optimization, Kernel Methods, PageRank, Kernel PCA, Association Rule Mining, Anomaly Detection, Parametric/Nonparametric Nonlinear Regression, etc.
This repository is dedicated to the lab activities of the course of Unsupervised Learning @UniTs
This repository contains the Python code for my blog post "Image denoising techniques: A comparison of PCA, kernel PCA, autoencoder, and CNN". See the post for more details and results.
Applying NLP methods and kernel PCA to a news dataset to build a clustering model.