Bibhuti Bhsan Sahoo (Bibhuti5)



Company: Capgemini

Location: Bangalore

Home Page: https://bibhutiport.blogspot.com/


Bibhuti Bhsan Sahoo's repositories

Potato-Disease-Classification

Potato Disease Classification

Setup for Python:
1. Install Python (Setup instructions).
2. Install Python packages:
   pip3 install -r training/requirements.txt
   pip3 install -r api/requirements.txt
3. Install TensorFlow Serving (Setup instructions).

Setup for ReactJS:
1. Install Node.js (Setup instructions).
2. Install NPM (Setup instructions).
3. Install dependencies:
   cd frontend
   npm install --from-lock-json
   npm audit fix
4. Copy .env.example as .env.
5. Change the API URL in .env.

Setup for the React Native app:
1. Complete the initial React Native setup (Setup instructions).
2. Install dependencies:
   cd mobile-app
   yarn install
   cd ios && pod install && cd ../
3. Copy .env.example as .env.
4. Change the API URL in .env.

Training the Model:
1. Download the data from Kaggle.
2. Keep only the folders related to potatoes.
3. Run Jupyter Notebook in the browser: jupyter notebook
4. Open training/potato-disease-training.ipynb in Jupyter Notebook.
5. In cell #2, update the path to the dataset.
6. Run all the cells one by one.
7. Copy the generated model and save it with a version number in the models folder.

Running the API

Using FastAPI:
1. Get inside the api folder: cd api
2. Run the FastAPI server using uvicorn: uvicorn main:app --reload --host 0.0.0.0
3. Your API is now running at 0.0.0.0:8000.

Using FastAPI and TF Serving:
1. Get inside the api folder: cd api
2. Copy models.config.example as models.config and update the paths in the file.
3. Run TF Serving (update the config file path below):
   docker run -t --rm -p 8501:8501 -v C:/Code/potato-disease-classification:/potato-disease-classification tensorflow/serving --rest_api_port=8501 --model_config_file=/potato-disease-classification/models.config
4. Run the FastAPI server using uvicorn. You can run it directly from main.py or main-tf-serving.py using the PyCharm run option (as shown in the video tutorial), or from the command prompt:
   uvicorn main-tf-serving:app --reload --host 0.0.0.0
5. Your API is now running at 0.0.0.0:8000.

Running the Frontend:
1. Get inside the frontend folder: cd frontend
2. Copy .env.example as .env and update REACT_APP_API_URL to the API URL if needed.
3. Run the frontend: npm run start

Running the App:
1. Get inside the mobile-app folder: cd mobile-app
2. Copy .env.example as .env and update URL to the API URL if needed.
3. Run the app (Android/iOS): npm run android or npm run ios

Creating the TF Lite Model:
1. Run Jupyter Notebook in the browser: jupyter notebook
2. Open training/tf-lite-converter.ipynb in Jupyter Notebook.
3. In cell #2, update the path to the dataset.
4. Run all the cells one by one.
5. The model will be saved in the tf-lite-models folder.

Deploying the TF Lite Model on GCP:
1. Create a GCP account.
2. Create a project on GCP (keep note of the project id).
3. Create a GCP bucket.
4. Upload the generated tf-lite model to the bucket at the path models/potato-model.tflite.
5. Install Google Cloud SDK (Setup instructions).
6. Authenticate with Google Cloud SDK: gcloud auth login
7. Run the deployment script:
   cd gcp
   gcloud functions deploy predict_lite --runtime python38 --trigger-http --memory 512 --project project_id
8. Your model is now deployed.
9. Use Postman to test the GCF using the trigger URL.

Deploying the TF Model (.h5) on GCP:
1. Create a GCP account.
2. Create a project on GCP (keep note of the project id).
3. Create a GCP bucket.
4. Upload the generated .h5 model to the bucket at the path models/potato-model.h5.
5. Install Google Cloud SDK (Setup instructions).
6. Authenticate with Google Cloud SDK: gcloud auth login
7. Run the deployment script:
   cd gcp
   gcloud functions deploy predict --runtime python38 --trigger-http --memory 512 --project project_id
8. Your model is now deployed.
9. Use Postman to test the GCF using the trigger URL.

Inspiration: https://cloud.google.com/blog/products/ai-machine-learning/how-to-serve-deep-learning-models-using-tensorflow-2-0-with-cloud-functions
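For reference, TF Serving's models.config uses a protobuf text format along these lines. This is a sketch only: the model name and base path below are placeholders, not the repo's actual values.

```
model_config_list {
  config {
    name: "potatoes_model"              # placeholder name
    base_path: "/potato-disease-classification/models"  # path inside the container
    model_platform: "tensorflow"
  }
}
```

The base_path must point at the models folder as seen inside the Docker container (i.e. under the -v mount), not the Windows host path.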

Language: Jupyter Notebook · Stargazers: 5 · Issues: 0

Machine-Learning

All my machine learning programs, with code, are here.

Language: Jupyter Notebook · Stargazers: 1 · Issues: 0

Resume-Screening-with-Python

What is resume screening? Hiring the right talent is a challenge for all businesses, and the challenge is magnified by a high volume of applicants when the business is labour-intensive, growing, and facing high attrition rates. In this project I take you through machine learning for resume screening with the Python programming language, starting by importing the necessary Python libraries and the dataset.
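Before vectorizing resumes, tutorials like this one usually normalize the raw text first. A minimal sketch of such a cleaning step, using only the standard library (the function name and exact regexes here are illustrative, not taken from the repo):

```python
import re

def clean_resume_text(text):
    """Strip URLs, mentions, hashtags, punctuation, and extra
    whitespace from raw resume text before vectorizing it."""
    text = re.sub(r"http\S+", " ", text)      # remove URLs
    text = re.sub(r"[@#]\S+", " ", text)      # remove mentions and hashtags
    text = re.sub(r"[^\w\s]", " ", text)      # replace punctuation with spaces
    text = re.sub(r"\s+", " ", text).strip()  # collapse runs of whitespace
    return text.lower()

print(clean_resume_text("Skilled in Python, SQL & ML! See http://example.com"))
# -> "skilled in python sql ml see"
```

The cleaned strings can then be fed to any vectorizer (e.g. a TF-IDF one) before training a classifier on the resume categories.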

Language: Jupyter Notebook · Stargazers: 1 · Issues: 0

Social-Links-Dashboard

This is my social-links dashboard, in which I show how to build a simple dashboard.

Stargazers: 0 · Issues: 0

Breast-Cancer-Detection

Breast Cancer Detection With ML

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0

Calculator-GUI-with-Python

You have to install the Kivy module (pip install kivy).

Language: Python · Stargazers: 0 · Issues: 0

CarServiceApplication

Full-stack Application using MERN stack.

Stargazers: 0 · Issues: 0

Data-ANZ-Program

This is the virtual internship program by ANZ. It takes one week to complete.

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0

Deep-Neural-Network---Application

Welcome to the fourth programming exercise of the deep learning specialization. You will now use everything you have learned to build a deep neural network that classifies cat vs. non-cat images. In the second exercise, you used logistic regression for this task and got 68% accuracy; this algorithm will give you 80% accuracy. By completing this assignment, you will: learn how to use all the helper functions you built in the previous assignment to build a model of any structure you want; experiment with different model architectures and see how each one behaves; and recognize that it is always easier to build your helper functions before attempting to build a neural network from scratch.
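The forward pass of such an L-layer network (ReLU on the hidden layers, sigmoid on the output) can be sketched in NumPy. This is a generic illustration, not the course's actual helper functions; the parameter-dictionary layout is an assumption:

```python
import numpy as np

def relu(Z):
    return np.maximum(0, Z)

def sigmoid(Z):
    return 1 / (1 + np.exp(-Z))

def l_layer_forward(X, params):
    """Forward pass through an L-layer net: ReLU for hidden
    layers, sigmoid for the output (cat vs. non-cat probability).
    params holds one (W, b) pair per layer, keyed "W1", "b1", ..."""
    A = X
    L = len(params) // 2
    for l in range(1, L):
        A = relu(params[f"W{l}"] @ A + params[f"b{l}"])
    return sigmoid(params[f"W{L}"] @ A + params[f"b{L}"])

# tiny example: 2 input features -> 3 hidden units -> 1 output
rng = np.random.default_rng(0)
params = {
    "W1": rng.standard_normal((3, 2)) * 0.01, "b1": np.zeros((3, 1)),
    "W2": rng.standard_normal((1, 3)) * 0.01, "b2": np.zeros((1, 1)),
}
probs = l_layer_forward(rng.standard_normal((2, 5)), params)
print(probs.shape)  # one probability per example: (1, 5)
```

Backpropagation then works through the same layers in reverse, which is where the previous assignment's helper functions come in.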

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0

Detecting-Fake-News-with-Python-and-Machine-Learning

What is fake news? A type of yellow journalism, fake news encapsulates pieces of news that may be hoaxes and is generally spread through social media and other online media. This is often done to further or impose certain ideas, frequently with political agendas. Such news items may contain false or exaggerated claims, may be amplified by algorithms, and users may end up in a filter bubble.

What is a TfidfVectorizer? TF (Term Frequency) is the number of times a word appears in a document. A higher value means a term appears more often than others, so the document is a good match when the term is part of the search terms. IDF (Inverse Document Frequency) measures how significant a term is in the entire corpus: words that occur many times in a document but also occur many times in many other documents may be irrelevant. The TfidfVectorizer converts a collection of raw documents into a matrix of TF-IDF features.

What is a PassiveAggressiveClassifier? Passive-aggressive algorithms are online learning algorithms. Such an algorithm remains passive on a correct classification outcome and turns aggressive on a misclassification, updating and adjusting. Unlike most other algorithms, it does not converge; its purpose is to make updates that correct the loss while causing very little change in the norm of the weight vector.

Detecting fake news with Python: the goal is to build a model that accurately classifies a piece of news as REAL or FAKE. Using sklearn, we build a TfidfVectorizer on our dataset, then initialize a PassiveAggressiveClassifier and fit the model. In the end, the accuracy score and the confusion matrix tell us how well the model fares.

The fake news dataset: we'll call it news.csv. It has a shape of 7796×4. The first column identifies the news, the second and third are the title and text, and the fourth column has labels denoting whether the news is REAL or FAKE.
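To make the TF and IDF definitions above concrete, here is a plain-Python sketch of one simple TF-IDF variant (raw counts for TF, log(N/df) for IDF). Note sklearn's TfidfVectorizer smooths and normalizes differently, so this is illustrative rather than a drop-in replacement:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute a TF-IDF matrix for pre-tokenized documents.
    TF = raw term count in the document; IDF = log(N / df),
    where df is the number of documents containing the term."""
    N = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vocab = sorted(df)
    matrix = []
    for doc in docs:
        tf = Counter(doc)
        matrix.append([tf[t] * math.log(N / df[t]) for t in vocab])
    return vocab, matrix

docs = [["fake", "news", "spreads"], ["real", "news"], ["fake", "claims"]]
vocab, M = tfidf(docs)
# "news" appears in 2 of 3 docs, so its weight is lower than
# that of "spreads", which appears in only 1 of 3.
```

A classifier such as PassiveAggressiveClassifier is then trained on rows of exactly this kind of matrix.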

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0

Odoo-Help-Desk

Help Desk for Odoo

Stargazers: 0 · Issues: 0

opencv

Open Source Computer Vision Library

License: Apache-2.0 · Stargazers: 0 · Issues: 0

Pin-Pong-Game-With-Kivy

Welcome to the Pong tutorial. This tutorial will teach you how to write Pong using Kivy. We'll start with a basic application like the one described in the Create an Application guide and turn it into a playable Pong game, describing each step along the way.
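The heart of the tutorial's update loop is moving the ball and bouncing it off the walls. That logic can be sketched in plain Python, with no Kivy dependency; the function and parameter names below are illustrative, not the tutorial's actual API:

```python
def step_ball(pos, vel, width, height):
    """Advance the ball one frame; flip the y velocity when it
    hits the top or bottom wall, and the x velocity at the
    left/right edges (where the real game would score a point)."""
    x, y = pos[0] + vel[0], pos[1] + vel[1]
    vx, vy = vel
    if y < 0 or y > height:
        vy = -vy
    if x < 0 or x > width:
        vx = -vx
    return (x, y), (vx, vy)

pos, vel = (10, 99), (5, 3)
pos, vel = step_ball(pos, vel, width=800, height=100)
print(pos, vel)  # (15, 102) (5, -3): the ball crossed the top wall, vy flipped
```

In the Kivy version this runs inside a Clock.schedule_interval callback, with the ball and paddles as widgets.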

Language: Python · Stargazers: 0 · Issues: 0

Python-Flask-Blogs

This is a Flask-based blog whose frontend is built with Bootstrap. If you have any questions or suggestions, feel free to open an issue or pull request :)

Language: JavaScript · Stargazers: 0 · Issues: 0

Simply-Website

This is a landing page for a typical company or organization; you can change the design and update all the data in it.

Language: HTML · Stargazers: 0 · Issues: 0

Web-Scraping-Projects-with-Python

Web scraping uses specialized tools to quickly and accurately extract data from web pages. Web scraping tools vary greatly in design and complexity depending on the project; the purpose is to collect data that can be useful in some way. You must have seen many articles and tutorials on web scraping, and I have written one myself. In this project, I do something different: scraping Instagram with Python.
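As a generic illustration of the idea (scraping Instagram itself requires authentication and is better done with a dedicated library), here is a minimal link extractor built only on the standard library's html.parser, run against a static HTML string:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag; a tiny stand-in for
    what libraries like BeautifulSoup do when scraping a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<html><body><a href="/profile">me</a> <a href="/posts">posts</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['/profile', '/posts']
```

In a real project the page string would come from an HTTP request, and you would respect the site's robots.txt and terms of service.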

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0

WhatsApp-Chat-Analysis

WhatsApp is one of the most used messenger applications today, with more than 2 billion users worldwide. It has been estimated that more than 65 billion messages are sent on WhatsApp daily, so we can use WhatsApp chats to analyze our conversations with a friend, a customer, or a group of people. In this project, I take you through the task of WhatsApp chat analysis with Python.
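A typical starting point for such an analysis is parsing an exported chat file and counting messages per sender. A minimal sketch with the standard library — note the date format of WhatsApp exports varies by phone locale, so the regex below is an assumption about one common format:

```python
import re
from collections import Counter

# Typical exported line: "12/07/21, 9:15 pm - Alice: Hello!"
LINE_RE = re.compile(r"^\d{1,2}/\d{1,2}/\d{2,4}, .*? - (.*?): (.*)$")

def messages_per_sender(lines):
    """Count messages per sender in a WhatsApp chat export.
    Lines that don't match (system notices, continuations of
    multi-line messages) are simply skipped."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            counts[m.group(1)] += 1
    return counts

chat = [
    "12/07/21, 9:15 pm - Alice: Hello!",
    "12/07/21, 9:16 pm - Bob: Hi Alice",
    "12/07/21, 9:17 pm - Alice: How are you?",
]
print(messages_per_sender(chat))  # Counter({'Alice': 2, 'Bob': 1})
```

From the same parsed structure you can derive message lengths, activity by hour, emoji counts, and the other statistics a chat-analysis notebook typically plots.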

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0