Prashant Wakchaure (prashant900555)

Company: Oliver Wyman Labs, Dublin

Location: Dublin, Ireland

Prashant Wakchaure's repositories

Java-bus-ticketing-system

Install the program on any drive except C. [Note: please don't install it on the C drive or in your Program Files directory.] Thanks & Regards. - Prashant Wakchaure

ML-svm-red-wine-quality

Classifies the quality of red wine as Good or Bad from the given parameters in the Kaggle dataset, using the Support Vector Machine supervised ML algorithm.
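
A minimal sketch of the approach: an SVM classifying wine as Good or Bad. The features here are synthetic stand-ins for the Kaggle red-wine columns (acidity, sulphates, alcohol, ...), and the labelling rule is a toy assumption, not the real dataset.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                        # 4 synthetic physico-chemical features
y = np.where(X[:, 0] + X[:, 3] > 0, "Good", "Bad")   # toy labelling rule, not real quality scores

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# Scaling before an RBF-kernel SVM is standard practice for this kind of data.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```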

Language: Jupyter Notebook | Stargazers: 3 | Issues: 1 | Issues: 0

back-end-task

REST API server using Node.js, Express.js, Mongoose & GraphQL.

Language: JavaScript | Stargazers: 1 | Issues: 1 | Issues: 0

Java-Multiple-Connects

2D GUI game implemented from scratch using JavaFX, an open-source desktop application platform.

Language: Java | Stargazers: 1 | Issues: 0 | Issues: 0

Java-sparsh-hospital-management-system

Install the program on any drive except C. [Note: please don't install it on the C drive or in your Program Files directory.] Thanks & Regards. - Prashant Wakchaure

Stargazers: 1 | Issues: 0 | Issues: 0

ML-Diabetes-Prevalance-Rate

The objective of this question is to gain insight from a dataset released by Tesco, a large supermarket chain in the UK. The dataset describes the purchasing behaviour of shoppers aggregated at the ward level: the fraction of different product types in the overall shopping basket. The features are a subset of the Tesco dataset available at https://figshare.com/articles/dataset/Area-level_grocery_purchases/7796666, where the various fields are described. The last column is a categorical feature that captures the diabetes prevalence rate in the ward, and the task is to predict this categorical feature from the features derived from the shopping behaviour.
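
A hedged sketch of the stated task: predict a categorical prevalence band from per-ward basket fractions. The column names and the labelling rule below are invented for illustration; the real fields come from the Tesco dataset linked above.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Basket fractions per ward: each row sums to 1, mimicking product-type shares.
frac = rng.dirichlet(np.ones(3), size=400)
df = pd.DataFrame(frac, columns=["f_sweets", "f_vegetables", "f_drinks"])  # hypothetical names
# Toy label: wards with a high sweets fraction get the "high" prevalence band.
df["prevalence"] = np.where(df["f_sweets"] > 1 / 3, "high", "low")

scores = cross_val_score(LogisticRegression(max_iter=1000),
                         df[["f_sweets", "f_vegetables", "f_drinks"]],
                         df["prevalence"], cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```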

Language: Jupyter Notebook | Stargazers: 1 | Issues: 0 | Issues: 0

R-Analysis-Brazilian-E-Commerce-Dataset

For this task I chose the "Brazilian E-Commerce Public Dataset by Olist" from Kaggle: https://www.kaggle.com/olistbr/brazilian-ecommerce. It satisfies the minimum requirement of 2 categorical and 3 numerical variables. I perform various types of analysis on the dataset to draw significant results in the form of summaries, dataframes, tables and numerous plots. At the end, I also demonstrate the correlation between the numeric variables in the dataset.
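
The repository does this analysis in R; purely as an illustration of the final correlation step, here is a pandas equivalent on synthetic data. The column names are made up to resemble an e-commerce schema, not the actual Olist tables.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
price = rng.gamma(2.0, 50.0, size=500)                  # synthetic order prices
freight = 0.1 * price + rng.normal(0, 2, size=500)      # freight cost tracks price
df = pd.DataFrame({
    "price": price,
    "freight_value": freight,
    "review_score": rng.integers(1, 6, size=500),       # uncorrelated 1-5 ratings
})

# Pairwise Pearson correlation between the numeric variables.
corr = df.corr()
print(corr.round(2))
```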

Stargazers: 1 | Issues: 0 | Issues: 0

Diagnosis-of-Pneumonia-using-Chest-X-Ray-Images

This paper demonstrates the task of classifying chest X-ray images as either Normal or Pneumonia, a task well known on the Kaggle platform as Chest X-Ray Images (Pneumonia). The manuscript proposes two deep-learning neural network models: a CNN, which I call the X-Ray CNN, tuned and trained from scratch, and the well-known ResNet50 transfer-learning model.

Language: Jupyter Notebook | Stargazers: 0 | Issues: 0 | Issues: 0

ML-US-Census-data

The objective of this question is to use ensemble learning to identify the extent to which classification performance can be improved by combining multiple models. Experiments are run on a dataset extracted from US Census data. The data contains 14 attributes, including age, race, sex and marital status, and the goal is to predict whether an individual earns over $50k per year.
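
The combination idea above can be sketched with a voting ensemble: several different models vote, and their combined score is compared against a single model. The data here is synthetic (14 features, echoing the Census attributes), not the actual Census extract.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary task with 14 features, standing in for the >$50k prediction.
X, y = make_classification(n_samples=600, n_features=14, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

single = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
# Majority vote over three heterogeneous base models.
ensemble = VotingClassifier([
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("lr", LogisticRegression(max_iter=1000)),
    ("nb", GaussianNB()),
]).fit(X_tr, y_tr)

print(f"single tree:     {single.score(X_te, y_te):.2f}")
print(f"voting ensemble: {ensemble.score(X_te, y_te):.2f}")
```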

Language: Jupyter Notebook | Stargazers: 0 | Issues: 0 | Issues: 0

News-Category-Text-Classification

The objective of this assignment is to scrape a corpus of news stories from a set of web pages, pre-process the data, and evaluate the performance of both binary and multi-label text classification algorithms on the data.
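
A minimal sketch of the binary classification step: TF-IDF features feeding a linear classifier. The toy headlines and the "business"/"sport" labels below stand in for the scraped news corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["stocks rally as markets surge", "shares fall on earnings miss",
        "team wins the league final", "striker scores twice in derby",
        "central bank holds interest rates", "coach praises squad after victory"]
labels = ["business", "business", "sport", "sport", "business", "sport"]

# TF-IDF turns each document into a weighted term vector; logistic
# regression then learns a linear decision boundary over those vectors.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(docs, labels)
print(clf.predict(["market rates fall again", "the squad wins the final"]))
```

For the multi-label part of the assignment, the same pipeline could be wrapped in `OneVsRestClassifier` with binarised label sets.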

Language: Jupyter Notebook | Stargazers: 0 | Issues: 1 | Issues: 0

OMDB-Movies-Data-Analysis

For this assignment I chose the Open Movie Database (OMDb) because it is one of the best movie APIs available. It not only has information about a particular movie, but also several international online ratings, along with awards won and other informative columns such as runtime and box-office collection. Using these, I analyse the data and draw insights about the cinema industry around the globe.

Language: Jupyter Notebook | Stargazers: 0 | Issues: 0 | Issues: 0

prashant900555

Config files for my GitHub profile.

Stargazers: 0 | Issues: 0 | Issues: 0

SQL-RDBMS-for-Wonka-Labs

This project report envisions the practical implementation of a whitepaper leaked from the laboratories of the notoriously secretive William P. Wonka III, grandson of that most famed Wonka of all: the founder of Wonka Industries and creator of the world's most delectable candies. Wonka has suffered a great deal of reputational damage, in spite of which his grandson is about to recast the Wonka brand by amplifying its reach into new food ventures, notably savoury baked goods and adult beverages; we will see what these are in later sections.

Multiple teams handle the various strands defined by the Wonka innovation pipeline, including scientists, marketing analysts, data analysts, and many more. I have been appointed as the DBA (Database Administrator), responsible for creating the RDBMS bedrock to support the strands offered by the Wonka Laboratories. After reading the 6-page whitepaper, I realised I had a lot of constraints, definitions, normalizations and procedural elements to take care of, so I used MySQL Workbench to design the Wonka Labs database schema consisting of multiple strands.

While designing the schema, the most important thing I kept in mind was the interpretability of the SQL queries, defined tables, views, procedural elements, etc. Even though the schema might seem gigantic, I took the data normalization process very seriously, which is why there are a lot of tables; they are evidence of how efficient and easy to query the entities of the database are.
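
The normalization idea can be illustrated in miniature: two related tables linked by a foreign key instead of one wide table. The table and column names below are invented for the sketch (the real schema was built in MySQL Workbench); SQLite is used here only so the example is self-contained.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
# Each product belongs to exactly one strand; strand names live in one
# place instead of being repeated on every product row (normalization).
conn.executescript("""
CREATE TABLE strand (
    strand_id INTEGER PRIMARY KEY,
    name      TEXT NOT NULL UNIQUE
);
CREATE TABLE product (
    product_id INTEGER PRIMARY KEY,
    strand_id  INTEGER NOT NULL REFERENCES strand(strand_id),
    name       TEXT NOT NULL
);
""")
conn.execute("INSERT INTO strand (strand_id, name) VALUES (1, 'savoury baked goods')")
conn.execute("INSERT INTO product (product_id, strand_id, name) "
             "VALUES (1, 1, 'everlasting pretzel')")

# A join reassembles the wide view on demand.
rows = conn.execute("""
    SELECT s.name, p.name FROM product p JOIN strand s USING (strand_id)
""").fetchall()
print(rows)
```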

Stargazers: 0 | Issues: 0 | Issues: 0