Atharva Bhave's repositories
Stock_trend_prediction_using_setiment_analysis_of_news_data.
I scraped news data using webhose.io and stock price data using nsepy, then labelled each day's news according to the corresponding rise or fall in prices. For the word embeddings, I used GloVe, provided by Stanford University. I used stop words from NLTK to remove sentence fillers that do not change the context, then used TF-IDF to drop the words that provide the least information. Finally, I used an LSTM to capture any time-dependent relations in the data set and predict, with a confidence score, whether a user should "buy" or "sell" the share based on today's news events.
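The labelling step described above can be sketched as follows. This is a hypothetical illustration with made-up prices (the repo pulls real ones via nsepy): each day's news is tagged "buy" if the closing price rose the next day, else "sell".

```python
def label_days(closes):
    """Map consecutive closing prices to buy/sell labels.

    closes: list of daily closing prices, oldest first.
    Returns one label per day except the last (no next-day price to compare).
    """
    labels = []
    for today, tomorrow in zip(closes, closes[1:]):
        labels.append("buy" if tomorrow > today else "sell")
    return labels

# Synthetic example prices, oldest first.
print(label_days([100.0, 102.5, 101.0, 101.0, 103.2]))
```

These labels then serve as the supervised targets for the LSTM.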
logistic-regression
Uses logistic regression to predict whether a mobile phone will be sold or not.
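A minimal pure-Python sketch of the idea, on a made-up toy dataset (the feature names and values here are assumptions, not from the repo): logistic regression trained by stochastic gradient descent to classify sold vs. not sold.

```python
import math

def sigmoid(z):
    # Logistic function: maps any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.1, epochs=2000):
    """Fit logistic-regression weights with stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the linear output
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Hypothetical features: [price in 10k rupees, battery score] -> sold (1) or not (0)
X = [[1.0, 0.9], [3.0, 0.2], [1.5, 0.8], [2.8, 0.3]]
y = [1, 0, 1, 0]
w, b = train(X, y)

# Cheap phone with a good battery: the model should lean towards "sold".
pred = sigmoid(sum(wj * xj for wj, xj in zip(w, [1.2, 0.85])) + b)
print("sold" if pred > 0.5 else "not sold")
```

In practice a library such as scikit-learn would replace the hand-rolled training loop.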
R-hex
Negotiating difficult terrain has always been a challenging task for wheeled robots. To solve this issue, engineers have designed various legged robots, taking inspiration from nature. Among these, six-legged robots are the most common, since six legs have an edge over other leg counts in stability. Hexapods generally use a tripod gait, in which two sets of three legs are alternately on the ground. This hexapod is our attempt to tackle the same issue of all-terrain driving capability. Instead of conventional legs, we used C-shaped legs, which have the added advantage of climbing obstacles considerably taller than the robot itself.
Stock-variability-calculator
The code pulls daily closing-price data for a given stock from NSE and then calculates various statistical parameters from that data.
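A small sketch of the statistics step, assuming the "daily increments" are percentage changes between consecutive closing prices; the prices below are synthetic stand-ins for what nsepy would return, and the parameter names are illustrative.

```python
import statistics

def daily_stats(closes):
    """Compute summary statistics of daily percentage changes in closing price."""
    changes = [(b - a) / a * 100 for a, b in zip(closes, closes[1:])]
    return {
        "mean_change_pct": statistics.mean(changes),
        "volatility_pct": statistics.stdev(changes),  # sample standard deviation
        "max_gain_pct": max(changes),
        "max_loss_pct": min(changes),
    }

# Synthetic closing prices, oldest first.
print(daily_stats([250.0, 255.0, 251.0, 260.0, 258.0]))
```

Volatility here is the sample standard deviation of daily returns, a common measure of stock variability.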
TensorFlow-Book
Accompanying source code for Machine Learning with TensorFlow. Refer to the book for step-by-step explanations.
trip_advisor_sentiment_analysis
Trip Advisor provided a dataset of its reviews with a binary classification label of ‘Happy’ or ‘Not Happy’. For the word embeddings, I used GloVe, provided by Stanford University. I used stop words from NLTK to remove sentence fillers that do not change the context, then used TF-IDF to remove the words that provide the least information. Finally, I used a 1D convolutional neural network (CNN) to predict whether a review was a ‘Happy’ or a ‘Not Happy’ one.
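The TF-IDF filtering step used in both sentiment projects can be sketched in pure Python. The reviews below are made-up examples, not from the Trip Advisor dataset: words that occur in most documents score low and are candidates for removal as low-information.

```python
import math
from collections import Counter

def tfidf_scores(docs):
    """Per-document TF-IDF score for each word.

    TF = word frequency within the document; IDF = log(N / document frequency).
    Words that appear in nearly every document get IDF near 0, i.e. low scores.
    """
    n = len(docs)
    df = Counter(w for doc in docs for w in set(doc.split()))
    scores = []
    for doc in docs:
        words = doc.split()
        tf = Counter(words)
        scores.append({w: (tf[w] / len(words)) * math.log(n / df[w]) for w in tf})
    return scores

# Hypothetical reviews standing in for the real dataset.
reviews = ["room was clean and great",
           "room was noisy and dirty",
           "great view great staff"]
scores = tfidf_scores(reviews)
# Common words like "room", "was", "and" score lower than distinctive ones
# like "clean" or "dirty", so they can be dropped before training.
```

In the actual pipeline a library implementation (e.g. scikit-learn's TfidfVectorizer) would typically be used instead.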