Fakhre Alam's repositories
Use-Case-Data-Dashboard-Application
Automated tool for data profiling, handling missing values, data validation, outlier detection, and model training
GenAI-Question-Ansering-App
A question-answering bot for the banking domain
TF-IDF_Document_Search
TF-IDF document search functionality using Streamlit
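The core idea behind this kind of search is ranking documents by TF-IDF-weighted cosine similarity against the query. A minimal pure-Python sketch of that ranking (not the repo's actual code, which wraps it in a Streamlit UI) might look like:

```python
import math
from collections import Counter

def tfidf_search(docs, query):
    """Rank documents against a query by TF-IDF cosine similarity.

    Returns document indices, best match first.
    """
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    # document frequency: in how many docs each term appears
    df = Counter(t for doc in tokenized for t in set(doc))
    idf = {t: math.log(n / df[t]) for t in df}

    def vec(tokens):
        # term frequency weighted by inverse document frequency
        tf = Counter(tokens)
        return {t: (tf[t] / len(tokens)) * idf.get(t, 0.0) for t in tf}

    def cosine(a, b):
        dot = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    qv = vec(query.lower().split())
    scores = [(cosine(qv, vec(doc)), i) for i, doc in enumerate(tokenized)]
    return [i for _, i in sorted(scores, reverse=True)]
```

For example, searching `["the cat sat", "the dog barked", "cat and dog"]` for `"cat"` ranks the document without the word "cat" last.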
ProgrammingAssignment2
Repository for Programming Assignment 2 for R Programming on Coursera
Time-Series
Time series data analysis
MLflow-Pipeline
MLflow is an open-source platform designed to manage the end-to-end machine learning lifecycle. It provides tools for tracking experiments, packaging code into reproducible runs, and sharing and deploying models
Machine-Learning-Automated-Pipeline-Prediction
This project is a comprehensive machine learning solution designed to automate the prediction of optimal pipelines for data processing and model training. It streamlines the end-to-end machine learning workflow, from data preprocessing to model deployment, utilizing automated techniques to enhance efficiency and performance.
KNN_CLASSIFIER_MNIST_DATA
A k-nearest-neighbours (KNN) classifier on the MNIST handwritten-digit dataset
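KNN classifies a sample by majority vote among its k closest training points. A minimal sketch of the algorithm in pure Python (the repo itself works on MNIST images, which are just 784-dimensional vectors treated the same way):

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Predict the label of x by majority vote among its k nearest
    training points (Euclidean distance)."""
    # sort training points by distance to the query point
    dists = sorted((math.dist(p, x), y) for p, y in zip(train_X, train_y))
    # majority vote among the k closest labels
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]
```

With two well-separated clusters, a query near one cluster gets that cluster's label.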
Stock-Market-Prediction
Using News Article to Predict Stock Market Movements
Titanic-classification-comprehensive-modeling
A beginner-friendly, heavily annotated notebook that walks through classic machine learning algorithms to predict whether a passenger survived the sinking of the Titanic, i.e. a binary classification problem. The verbose explanations aim to make every step as simple and straightforward as possible for newcomers. Keep learning, Fakhre Alam
Data-Wrangling
data_wrangling
Deep-Learning
Python Basics with NumPy (optional assignment). This exercise gives a brief introduction to Python; even if you have used Python before, it will help familiarize you with the functions needed later. Instructions: you will be using Python 3; avoid for-loops and while-loops unless you are explicitly told to use them; do not modify the `# GRADED FUNCTION [function name]` comment in some cells, or your work will not be graded; each cell containing that comment should contain only one function; after coding your function, run the cell right below it to check that your result is correct. After this assignment you will be able to use iPython notebooks, use NumPy functions and NumPy matrix/vector operations, understand the concept of "broadcasting", and vectorize code. Let's get started!
Qlikview-Components
A library for common Qlikview Scripting tasks
Metadata-Management
Seminar held by CDAC and Manipal
Image-Processing-in-Python
Processing images in Python
voting-classifier_vol2
voting classifier
voting-classifier
I am trying to predict loan outcomes (0, 1) using an unweighted soft voting ensemble classifier (sklearn's VotingClassifier class with voting='soft'). For a given sample, this outputs the class label with highest averaged probability predicted by the component classifiers.
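The averaging step described above is simple to state directly. A minimal pure-Python sketch of unweighted soft voting (scikit-learn's `VotingClassifier(voting='soft')` does this internally over fitted estimators' `predict_proba` outputs):

```python
def soft_vote(prob_lists):
    """Unweighted soft voting: average per-class probabilities from
    several classifiers and return the class with the highest mean.

    prob_lists: one list of class probabilities per classifier,
    e.g. [[p0, p1], [p0, p1], ...] for a binary problem.
    """
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    # mean probability for each class across all component classifiers
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)
```

For instance, with three classifiers predicting `[0.6, 0.4]`, `[0.3, 0.7]`, and `[0.2, 0.8]` for classes (0, 1), the averaged probabilities favour class 1.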
Loan-Data-Analysis-Report
Data Analysis Report
Neural-Network
The intention of this notebook is to use TensorFlow to build a neural network that predicts default likelihood, and to visualize some of the insights generated from the study. This kernel will evolve over time as I continue to add features and study the Lending Club data
Song-Recommender
Song Recommender Analysis
Titanic-Data-analysis
This notebook is a basic, introductory primer to the method of ensembling models, in particular the variant known as stacking. In a nutshell, stacking uses the predictions of a few basic machine learning models (classifiers) as a first level (base), and then uses another model at the second level to predict the output from those first-level predictions. The Titanic dataset is a prime candidate for introducing this concept, as many newcomers to Kaggle start out here. Furthermore, even though stacking has been responsible for many a team winning Kaggle competitions, there seems to be a dearth of kernels on the topic, so I hope this notebook can fill somewhat of that void. I am quite a newcomer to the Kaggle scene myself; the first proper ensembling/stacking script I managed to chance upon and study was written by the great Faron in the AllState Severity Claims competition. The material in this notebook borrows heavily from Faron's script, although ported to ensembles of classifiers, whereas his used ensembles of regressors; please check out his script: Stacking Starter, by Faron. I hope this notebook does the concept justice and conveys ensembling in an intuitive and concise manner. My other standalone Kaggle script, which implements exactly the same ensembling steps (albeit with different parameters), gives a public LB score of 0.808, good enough for the top 9%, and runs in just under 4 minutes, so I am pretty sure there is a lot of room to improve on it. Please feel free to leave comments on how I can improve
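The two-level structure described above can be sketched in a few lines. The toy base "classifiers" and meta-model here are made up for illustration; a real stacking pipeline (as in Faron's script) trains the second-level model on out-of-fold predictions from the first level to avoid leakage:

```python
# Toy first-level (base) models: each maps a 2-feature sample
# to an estimated probability of class 1.
base_models = [
    lambda x: 1.0 if x[0] > 0.5 else 0.0,   # threshold on feature 0
    lambda x: 1.0 if x[1] > 0.5 else 0.0,   # threshold on feature 1
    lambda x: (x[0] + x[1]) / 2,            # linear blend of both features
]

def meta_model(level_one_preds):
    """Toy second-level model: averages the base predictions
    and thresholds at 0.5."""
    return int(sum(level_one_preds) / len(level_one_preds) > 0.5)

def stacked_predict(x):
    """First level produces predictions; second level predicts from them."""
    return meta_model([m(x) for m in base_models])
```

A sample where both features are high is pushed to class 1 by all three base models; a sample where both are low goes to class 0.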
Document-Retrival-
Document Retrieval Analysis
Analyzing-Product-Sentiment-Analyisis
Sentiment Analysis
Deep-Learning-for-Image-Classification-Image-Processing-
Using deep learning for image classification
Deep-Features-for-Image-Retrieval-Imag-Processing
Deep learning for image processing and retrieval