Repositories under the k-nearest-neighbor-classifier topic:
Includes the top ten must-know machine learning methods in R.
This repository contains the Iris Classification Machine Learning Project, a comprehensive exploration of machine learning techniques applied to classifying iris flowers into different species based on their physical characteristics.
A data-driven trade bot that runs an ensemble of 3 different ML algorithms to generate buy/sell signals for a given asset and timeframe using technical indicators.
A simple implementation of the K-Nearest Neighbour algorithm.
Fault diagnosis of some critical and non-critical faults in electric drives using Machine Learning.
PCA (Principal Component Analysis) for the Seed dataset in machine learning.
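None of that repository's code is shown here; as a rough illustration of the idea behind PCA, the sketch below computes the first principal component of 2-D toy points by hand (center the data, build the 2x2 covariance matrix, solve its characteristic equation analytically). The data points are made up for the example.

```python
import math

def pca_first_component(points):
    """First principal component of 2-D points, computed by hand.

    Works for 2-D data only: the 2x2 covariance matrix's eigenvalues
    can be found analytically from its trace and determinant.
    """
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    xs = [p[0] - mx for p in points]  # centered x coordinates
    ys = [p[1] - my for p in points]  # centered y coordinates
    # Sample covariance entries (divide by n - 1)
    sxx = sum(x * x for x in xs) / (n - 1)
    syy = sum(y * y for y in ys) / (n - 1)
    sxy = sum(x * y for x, y in zip(xs, ys)) / (n - 1)
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    t, d = sxx + syy, sxx * syy - sxy * sxy
    lam = (t + math.sqrt(t * t - 4 * d)) / 2
    # Eigenvector for lam, normalised to unit length
    vx, vy = (sxy, lam - sxx) if abs(sxy) > 1e-12 else (1.0, 0.0)
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm

# Toy points that vary mostly along the line y = x, so the first
# component should point roughly along (1/sqrt(2), 1/sqrt(2)).
pts = [(1, 1.1), (2, 1.9), (3, 3.2), (4, 3.9), (5, 5.1)]
v = pca_first_component(pts)
print(v)
```

For real datasets like Seeds (7 features), one would use an eigendecomposition or SVD routine instead of the closed-form 2x2 solution.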
This project uses Strava's API to download and process my workout data.
Syracuse University, Masters of Applied Data Science - IST 707 Data Analytics
This project focuses on predicting heart disease using the K-Nearest Neighbors (KNN) classification algorithm implemented in a Jupyter Notebook. It aims to provide a tool that can assist in early detection and diagnosis of heart disease based on given input features.
An Open MPI implementation of the well known K-Nearest Neighbors (Machine Learning) classifier.
Fraud detection
This is a Python-based application that predicts diseases from the symptoms entered by the user, using machine learning (the KNN classifier algorithm).
Static and dynamic analysis of Android malware using various machine learning algorithms.
This project was made as a report for a Big Data Challenge at Satria Data 2020 by IPB University.
This project involves detecting iris species using the k-nearest neighbors (KNN) algorithm in Jupyter Notebook. The iris species detection task is a classic problem in machine learning, where the goal is to classify iris flowers into different species based on their measurements.
Portfolio
A collection of classical machine learning algorithms.
Creating a KNN Classifier with my own code
The method was developed on the open-access Stroke Prediction dataset, which is used to predict whether a patient is likely to have a stroke based on input parameters such as age, various diseases, BMI, average glucose level, and smoking status. The k-nearest neighbors and random forest algorithms are applied to the dataset.
A supervised machine learning algorithm that recognizes members of the Simpsons.
k-Nearest Neighbors (KNN) used for an Ethereum blockchain classification problem.
A user-drawn digit recognition program.
A basic movie recommendation system that uses content-based filtering to suggest the top 5 movies most similar to the user's search.
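The repository's implementation is not shown; a minimal sketch of content-based filtering, assuming each movie is represented by a hand-made genre feature vector (the titles and vectors below are hypothetical), ranks other movies by cosine similarity to the queried one:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_similar(title, movies, n=5):
    """Return the n movies most similar to `title` by cosine similarity."""
    query = movies[title]
    scored = [(cosine(query, vec), other)
              for other, vec in movies.items() if other != title]
    scored.sort(reverse=True)  # highest similarity first
    return [other for _, other in scored[:n]]

# Hypothetical genre vectors: [action, comedy, sci-fi]
movies = {
    "Alpha": [1, 0, 1],
    "Beta":  [1, 0, 1],
    "Gamma": [0, 1, 0],
    "Delta": [1, 1, 0],
}
print(top_similar("Alpha", movies, n=2))  # → ['Beta', 'Delta']
```

Real systems typically build the vectors from metadata (genres, keywords, cast) with TF-IDF rather than hand-coding them.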
A predictive model, based on 6 binary and multiclass classification models, capable of distinguishing "bad" connections (attacks) from "good" or normal connections.
This repository contains a Python implementation of a K-Nearest Neighbors (KNN) classifier from scratch. It's applied to the "BankNote_Authentication" dataset, which consists of four features (variance, skew, curtosis, and entropy) and a class attribute indicating whether a banknote is real or forged.
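That repository's code is not reproduced here; as a generic illustration of what a from-scratch KNN classifier involves, the sketch below classifies a query point by majority vote among its k nearest training points under Euclidean distance, using toy 2-D data rather than the banknote features:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Euclidean distance from the query to every training sample
    dists = [(math.dist(query, x), label) for x, label in zip(train_X, train_y)]
    dists.sort(key=lambda t: t[0])             # nearest first
    top_k = [label for _, label in dists[:k]]  # labels of the k closest points
    return Counter(top_k).most_common(1)[0][0]

# Toy 2-D data: class 0 clusters near (0, 0), class 1 near (5, 5)
X = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
y = [0, 0, 0, 1, 1, 1]

print(knn_predict(X, y, (0.5, 0.5), k=3))  # → 0
print(knn_predict(X, y, (5.5, 5.5), k=3))  # → 1
```

On a real dataset such as BankNote_Authentication, the same function would take 4-dimensional feature tuples; scaling the features first usually matters, since KNN is distance-based.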
Classifying motion-capture hand postures with supervised learning models.
Data Classification using K-Nearest Neighbour Classifier and Bayes Classifier with Unimodal Gaussian Density
Classifying different types of water based on analysis, using various machine learning algorithms to solve this use case.
Performance comparison of two different distance metrics in K-Nearest Neighbors.
Building a simple KNN classifier from scratch, without using frameworks' built-in functions, and applying it to the Pen Digits dataset.
🌟 Scott Miner's GitHub portfolio showcasing personal projects, coding skills, and expertise in Software Development/Data Analytics/AI/ML. Get in touch for collaboration!
A model built using classification algorithms to predict diabetes at an early stage.
I contributed to a group project using the Life Expectancy (WHO) dataset from Kaggle where I performed regression analysis to predict life expectancy and classification to classify countries as developed or developing. The project was completed in Python using the pandas, Matplotlib, NumPy, seaborn, scikit-learn, and statsmodels libraries. The regression models were fitted on the entire dataset, along with subsets for developed and developing countries. I tested ordinary least squares, lasso, ridge, and random forest regression models. Random forest regression performed the best on all three datasets and did not overfit the training set. The testing set R² was 0.96 for the entire dataset and the developing country subset; the developed country subset achieved an R² of 0.8. I tested seven different classification algorithms to classify a country as developing or developed. The models obtained testing set balanced accuracies ranging from 86% to 99%. From best to worst, the models included gradient boosting, random forest, Adaptive Boosting (AdaBoost), decision tree, k-nearest neighbors, support-vector machines, and naive Bayes. I tuned all the models' hyperparameters. None of the models overfitted the training set.