Repositories under the recall-precision topic:
A repo containing implementations and theoretical explanations of important machine learning concepts. It will remain under development for a long time; I'll keep adding material whenever I have something to add and the time to do it. You can use it to learn the basics of Machine Learning more or less from scratch.
Sentiment analysis is an NLP technique that consists of extracting sentiment and emotion from raw text.
Information Retrieval with Lucene and the CISI dataset. Index documents and search them with IB, DFR, BM25, TF-IDF, Boolean, Axiomatic, and LM-Dirichlet similarities, then calculate Recall, Precision, MAP (Mean Average Precision), and F-Measure.
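The evaluation metrics named above can be sketched in plain Python. This is a minimal illustration with a hypothetical toy query, not the repo's actual Lucene code:

```python
def precision_recall_f1(retrieved, relevant):
    """Set-based Precision, Recall, and F-measure for one query."""
    tp = len(set(retrieved) & set(relevant))
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def average_precision(ranked, relevant):
    """AP for one ranked result list; MAP is the mean of AP over all queries."""
    relevant = set(relevant)
    hits, score = 0, 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            score += hits / rank  # precision at each relevant hit
    return score / len(relevant) if relevant else 0.0

# Toy example: system returns d1, d2, d3; only d1 and d3 are relevant.
p, r, f = precision_recall_f1(["d1", "d2", "d3"], ["d1", "d3"])
ap = average_precision(["d1", "d2", "d3"], ["d1", "d3"])
```

Here precision is 2/3, recall is 1.0, F-measure is 0.8, and AP is 5/6; averaging AP over every CISI query would give MAP.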
MNIST Handwritten Digits Classification
Support vector machines for medical disease detection. Both linearly and non-linearly separable data can be fitted with an SVM through its kernel specialization. In medical applications we focus on precision and recall rather than accuracy.
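The "kernel specialization" mentioned above comes down to swapping the similarity function the SVM uses. A minimal sketch of the two standard kernels (illustrative functions, not the repo's code; `gamma=0.5` is an assumed parameter):

```python
import math

def linear_kernel(x, y):
    """Dot product: appropriate when the classes are linearly separable."""
    return sum(a * b for a, b in zip(x, y))

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel: an implicit non-linear feature map,
    so the SVM can separate classes that no hyperplane splits."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

x, y = [1.0, 0.0], [0.0, 1.0]
k_lin = linear_kernel(x, y)  # 0.0: orthogonal points look unrelated linearly
k_rbf = rbf_kernel(x, y)     # exp(-1): still registers closeness in space
```

An SVM trained with `rbf_kernel` instead of `linear_kernel` can fit non-linear decision boundaries without ever computing the feature map explicitly.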
Code to detect credit card fraud.
Classification machine learning models were trained and used to identify what features contributed to customer churn rate.
Recognize fraudulent credit card transactions so that customers are not charged for items they did not purchase.
In this repository, we work on upGrad's lead score dataset and see how the problem can be solved using exploratory data analysis techniques and supervised machine learning models.
In this project I explore a dataset seen in past projects, the diabetes dataset. It contains information about patient characteristics. We want to study the patients' characteristics and find possible relationships.
Decision tree model and model evaluation.
machine learning
This is my Hamoye Stage C tag-along project. The notebook focuses on applying Machine Learning Classification models and Measuring Classification Performance.
Machine learning for credit card default. Precision-recall metrics are calculated because the data are imbalanced. Confusion matrices and test statistics are compared across logistic regression with over- and under-sampling, decision trees, SVM, and ensemble learning with Random Forest, AdaBoost, and Gradient Boosting. The Easy Ensemble AdaBoost classifier appears to be the best-fitting model for the given data.
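Why precision-recall rather than accuracy on imbalanced data can be shown with a tiny pure-Python sketch (synthetic labels, not the repo's dataset): a classifier that always predicts the majority "no default" class scores high accuracy while catching zero defaults.

```python
def confusion(y_true, y_pred, positive=1):
    """Confusion-matrix counts for the positive (default) class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = len(y_true) - tp - fp - fn
    return tp, fp, fn, tn

# Synthetic imbalanced labels: 95 non-defaults (0), 5 defaults (1).
y_true = [0] * 95 + [1] * 5
majority = [0] * 100                 # always predict "no default"

tp, fp, fn, tn = confusion(y_true, majority)
accuracy = (tp + tn) / len(y_true)   # 0.95: looks excellent
recall = tp / (tp + fn)              # 0.0: misses every default
```

This gap is exactly what motivates the over/under-sampling and ensemble methods compared in the repo: they trade a little accuracy for recall on the minority class.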