Muhammad Sarmad (msarmad17)



Company: Intuitive Solutions

Location: Tampa, Florida

Home Page: https://www.linkedin.com/in/muhammad-sarmad/


Muhammad Sarmad's repositories

Wheelchair-Distance-Module

An Arduino sketch that calculates the distance traveled by a wheelchair, and the angle turned (if any), using Hall-effect sensors and magnets mounted on the wheelchair.

Language: C++ | License: MIT | Stargazers: 3 | Issues: 0
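The sketch itself is C++ for Arduino; as a hedged illustration of the underlying odometry arithmetic, here is a Python sketch. All parameter values (`WHEEL_CIRCUMFERENCE_M`, `MAGNETS_PER_WHEEL`, `WHEEL_BASE_M`) are hypothetical, not taken from the repository:

```python
import math

# Hypothetical parameters -- the repository's actual values are not shown.
WHEEL_CIRCUMFERENCE_M = 1.94   # assumed 24-inch wheelchair wheel
MAGNETS_PER_WHEEL = 4          # assumed magnets per wheel for the Hall sensor
WHEEL_BASE_M = 0.6             # assumed distance between the two drive wheels

def distance_per_pulse():
    """Distance a wheel travels between two consecutive Hall-sensor pulses."""
    return WHEEL_CIRCUMFERENCE_M / MAGNETS_PER_WHEEL

def travel(left_pulses, right_pulses):
    """Return (distance traveled in meters, turn angle in degrees).

    Distance is the average of the two wheels' travel; the turn angle
    follows from the difference in wheel travel divided by the wheel
    base (standard differential-drive odometry).
    """
    d_left = left_pulses * distance_per_pulse()
    d_right = right_pulses * distance_per_pulse()
    distance = (d_left + d_right) / 2
    angle_rad = (d_right - d_left) / WHEEL_BASE_M
    return distance, math.degrees(angle_rad)
```

Equal pulse counts on both wheels yield straight-line travel (zero angle); an imbalance yields a proportional turn.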

Coursera_Capstone

This repository is for the IBM Data Science Capstone project on Coursera

Language: Jupyter Notebook | Stargazers: 0 | Issues: 1

CS-7641_A2

Project 2 for the Summer 2024 session of CS-7641 at Georgia Tech

Language: Jupyter Notebook | Stargazers: 0 | Issues: 1

CS-7641_A3

Project 3 for the Summer 2024 session of CS-7641 at Georgia Tech

Language: Jupyter Notebook | Stargazers: 0 | Issues: 1

Jumping-Jim-Graph-Search

A DFS-based coding project that solves the Jumping Jim maze problem: finding a path from the starting point to the exit point.

Language: C++ | License: GPL-3.0 | Stargazers: 0 | Issues: 1
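The repository is C++; as a minimal sketch of the DFS idea, here is a Python version under the common Jumping Jim convention that each cell holds a mandatory jump length (the repository's exact input format and interface are assumptions here):

```python
def solve_jumping_jim(grid):
    """Depth-first search for a Jumping Jim-style maze: from cell (r, c)
    you must jump exactly grid[r][c] cells up, down, left, or right.
    Returns a list of cells from the top-left start to the bottom-right
    exit, or None if no path exists."""
    rows, cols = len(grid), len(grid[0])
    goal = (rows - 1, cols - 1)
    visited = set()

    def dfs(cell, path):
        if cell == goal:
            return path
        visited.add(cell)
        r, c = cell
        jump = grid[r][c]
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr * jump, c + dc * jump
            nxt = (nr, nc)
            if 0 <= nr < rows and 0 <= nc < cols and nxt not in visited:
                result = dfs(nxt, path + [nxt])
                if result is not None:
                    return result
        return None

    return dfs((0, 0), [(0, 0)])
```

A global visited set is enough here: DFS reachability still finds a path to the exit whenever one exists, without re-expanding cells.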

k-means-in-Spark

A basic k-means implementation in PySpark using RDDs. The points clustered are latitude/longitude coordinates derived from the IP addresses of accesses to a web server.

Language: Python | License: MIT | Stargazers: 0 | Issues: 0
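A plain-Python stand-in for the RDD version's map/reduce structure (the repository uses PySpark; this sketch only mirrors the per-iteration logic, with a naive first-k initialization assumed):

```python
def kmeans(points, k, iters=20):
    """Minimal k-means mirroring the RDD pipeline's two steps per
    iteration: "map" each (lat, lon) point to its nearest centroid,
    then "reduce" each cluster to the mean of its assigned points."""
    centroids = list(points[:k])  # naive initialization: first k points
    for _ in range(iters):
        clusters = {i: [] for i in range(k)}
        for p in points:  # "map": assign to nearest centroid (squared distance)
            i = min(range(k), key=lambda j: (p[0] - centroids[j][0]) ** 2
                                            + (p[1] - centroids[j][1]) ** 2)
            clusters[i].append(p)
        for i, pts in clusters.items():  # "reduce": recompute cluster means
            if pts:
                centroids[i] = (sum(x for x, _ in pts) / len(pts),
                                sum(y for _, y in pts) / len(pts))
    return centroids
```

In the Spark version the assignment step would be a `map` over the point RDD and the mean step a `reduceByKey`; the arithmetic is identical.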

Movie-Review-Classifier

A naive Bayes classifier that labels movie reviews as either positive (true) or negative (false).

Language: Python | License: GPL-3.0 | Stargazers: 0 | Issues: 1
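A minimal multinomial naive Bayes sketch in the spirit of the repository (the actual preprocessing, smoothing choices, and interface are assumptions; add-one Laplace smoothing is used here):

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (word_list, label) pairs. Returns log priors and
    per-class word log-likelihoods with add-one (Laplace) smoothing."""
    labels = Counter(label for _, label in docs)
    word_counts = {lab: Counter() for lab in labels}
    for words, lab in docs:
        word_counts[lab].update(words)
    vocab = {w for c in word_counts.values() for w in c}
    priors = {lab: math.log(n / len(docs)) for lab, n in labels.items()}
    likelihood = {
        lab: {w: math.log((word_counts[lab][w] + 1)
                          / (sum(word_counts[lab].values()) + len(vocab)))
              for w in vocab}
        for lab in labels
    }
    return priors, likelihood

def classify(words, priors, likelihood):
    """Pick the label maximizing log prior + summed word log-likelihoods.
    Out-of-vocabulary words are simply ignored (contribution 0.0)."""
    return max(priors, key=lambda lab: priors[lab] + sum(
        likelihood[lab].get(w, 0.0) for w in words))
```

With boolean labels this returns `True` for reviews whose words are more likely under the positive class, matching the positive/negative convention described above.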

Spell-Checker

A spell-checker implementation based on a revised Levenshtein edit-distance algorithm, optimized for QWERTY keyboard layouts.

License: GPL-3.0 | Stargazers: 0 | Issues: 1
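One common way to adapt Levenshtein distance to a QWERTY layout is to make substitutions between physically adjacent keys cheaper, since they are likelier typos. The cost model below (0.5 for neighbors, 1.0 otherwise) is an assumption for illustration, not the repository's actual revision:

```python
# QWERTY rows used to approximate physical key positions.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
POS = {ch: (r, c) for r, row in enumerate(ROWS) for c, ch in enumerate(row)}

def sub_cost(a, b):
    """Substitution cost: cheaper when the two keys are QWERTY neighbors."""
    if a == b:
        return 0.0
    (r1, c1), (r2, c2) = POS.get(a, (9, 9)), POS.get(b, (-9, -9))
    return 0.5 if abs(r1 - r2) <= 1 and abs(c1 - c2) <= 1 else 1.0

def weighted_edit_distance(s, t):
    """Standard dynamic-programming edit distance, but with the
    keyboard-aware substitution cost above."""
    m, n = len(s), len(t)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = float(i)
    for j in range(1, n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1.0,                     # deletion
                          d[i][j - 1] + 1.0,                     # insertion
                          d[i - 1][j - 1] + sub_cost(s[i - 1], t[j - 1]))
    return d[m][n]
```

Under this model "cat" is closer to "vat" (c and v are adjacent keys) than to "qat", which plain Levenshtein distance cannot express.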

Sports-Goods-Website

A sports-goods website built for a database project, with web pages implemented using the Python Flask library and a Microsoft SQL Server database.

Language: Python | License: MIT | Stargazers: 0 | Issues: 0

Subreddit-Classifier

This is the code for a model trained on comments from four subreddits. The training set contains 19,600 unclean comments, full of typos and slang. I first cleaned the comments, then used word2vec to compute the sum of word vectors for each comment. Next, I used TF-IDF and cosine similarity to find words that are common within each subreddit and relevant to its topic. For each of these salient words, I appended a binary feature to each comment's vector: 1 if the comment contained the word, 0 if not. The final vectors have ~390 dimensions (300 from the word2vec representation and ~90 from the TF-IDF add-on words). Trained on this data, scikit-learn's logistic regression model achieved 78.5% accuracy when assigning comments from the test set to one of the four subreddits.

Language: Jupyter Notebook | License: GPL-3.0 | Stargazers: 0 | Issues: 1
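The feature-construction step described above can be sketched as follows. This is a hedged illustration of concatenating binary keyword indicators onto a dense vector; the function names, and the word2vec/TF-IDF machinery producing the inputs, are assumed rather than copied from the notebook:

```python
def keyword_features(comment_tokens, keywords):
    """Binary add-on features: 1.0 if the comment contains a given
    subreddit-salient keyword, else 0.0. In the original setup,
    `keywords` would be the ~90 TF-IDF-selected words."""
    present = set(comment_tokens)
    return [1.0 if w in present else 0.0 for w in keywords]

def build_vector(word2vec_sum, comment_tokens, keywords):
    """Final feature vector: the 300-d word2vec sum concatenated with
    the binary keyword indicators (~390 dimensions overall)."""
    return list(word2vec_sum) + keyword_features(comment_tokens, keywords)
```

The resulting vectors can be fed directly to a classifier such as scikit-learn's `LogisticRegression`.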

Thanksgiving-Sentiment-Analysis

Sentiment analysis of tweets collected via the Twitter API during the week of Thanksgiving 2018. Initial analysis used the nltk.sentiment.vader module, provided as part of NLTK in Python, to compute average positivity and negativity for each day of the week. Later, user-hashtag weighted bipartite graphs were built using networkx to relate average sentiment to the actual use of Thanksgiving-related hashtags.

Language: Python | Stargazers: 0 | Issues: 0
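The per-day averaging step can be sketched without the NLTK dependency. Here each score stands in for the value VADER's `polarity_scores()` would produce for a tweet; the grouping and averaging is what this sketch shows, not the sentiment scoring itself:

```python
from collections import defaultdict

def daily_averages(scored_tweets):
    """Average sentiment per day. Each item is (day, score), where
    `score` is a precomputed sentiment value for one tweet."""
    totals = defaultdict(lambda: [0.0, 0])  # day -> [running sum, count]
    for day, score in scored_tweets:
        totals[day][0] += score
        totals[day][1] += 1
    return {day: s / n for day, (s, n) in totals.items()}
```

The same grouping pattern extends to the bipartite-graph stage: pairs of (user, hashtag) with sentiment weights can be accumulated per edge before handing them to networkx.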