
Disaster Response Pipeline Project

[Intro Pic]

Table of Contents

  1. Description
  2. Getting Started
    1. Dependencies
    2. Executing Program
    3. Additional Material
  3. Authors
  4. License
  5. Acknowledgements
  6. Screenshots

Description

This project is part of the Data Science Nanodegree Program by Udacity, in collaboration with Figure Eight. The initial dataset contains pre-labelled tweets and messages from real-life disasters. The aim of the project is to build a Natural Language Processing (NLP) tool that categorizes messages.

The project is divided into the following sections:

  1. Data Processing: an ETL pipeline to extract data from the source, clean it and save it in a proper database structure (a minimal sketch of this step follows the list)
  2. Machine Learning Pipeline: trains a model able to classify text messages into categories
  3. Web App: shows model results in real time.
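
As a quick illustration, the heart of the ETL step looks roughly like the sketch below (the id join key, the ;-separated category encoding and the DisasterResponse table name are assumptions based on the Figure Eight dataset layout):

    # Minimal ETL sketch: merge messages with categories, split the
    # category string into binary columns, and save to SQLite.
    import pandas as pd
    from sqlalchemy import create_engine

    messages = pd.read_csv('data/disaster_messages.csv')
    categories = pd.read_csv('data/disaster_categories.csv')
    df = messages.merge(categories, on='id')  # 'id' join key assumed

    # Split "related-1;request-0;..." into one 0/1 column per category.
    cats = df['categories'].str.split(';', expand=True)
    cats.columns = [value.split('-')[0] for value in cats.iloc[0]]
    for col in cats:
        cats[col] = cats[col].str[-1].astype(int)

    df = pd.concat([df.drop(columns='categories'), cats], axis=1).drop_duplicates()

    # Persist to the SQLite database consumed by the training script.
    engine = create_engine('sqlite:///data/DisasterResponse.db')
    df.to_sql('DisasterResponse', engine, index=False, if_exists='replace')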

Getting Started

Dependencies

  • Python 3.5+ (I used Python 3.6.5)
  • Machine Learning Libraries: NumPy, SciPy, Pandas, Scikit-Learn
  • Natural Language Processing Libraries: NLTK
  • SQLite Database Libraries: SQLAlchemy
  • Web App and Data Visualization: Flask, Plotly
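
All of the above can be installed with pip, and NLTK needs its corpora downloaded once. A minimal sketch; exactly which corpora the tokenizer needs is an assumption (punkt, wordnet and stopwords are a common set):

    # Install the dependencies listed above (versions are not pinned here).
    pip install numpy scipy pandas scikit-learn nltk sqlalchemy flask plotly

    # Download the NLTK data the tokenizer is assumed to need.
    python -c "import nltk; nltk.download(['punkt', 'wordnet', 'stopwords'])"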

Executing Program

  1. Run the following commands in the project's root directory to set up your database and model.

    • To run the ETL pipeline that cleans the data and stores it in the database:
      python data/process_data.py data/disaster_messages.csv data/disaster_categories.csv data/DisasterResponse.db
    • To run the ML pipeline that trains the classifier and saves it:
      python models/train_classifier.py data/DisasterResponse.db models/classifier.pkl
  2. Run the following command in the app's directory to start your web app:
      python run.py

  3. Go to http://0.0.0.0:3001/
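
Under the hood, run.py is a small Flask app that loads the pickled model and classifies whatever message you submit. A minimal sketch, assuming a /go route and the table/column layout from the ETL sketch above (the real app also renders Plotly graphs in HTML templates):

    # Minimal sketch of the web app: load data and model, classify a message.
    import pickle

    import pandas as pd
    from flask import Flask, jsonify, request
    from sqlalchemy import create_engine

    app = Flask(__name__)

    # Load the artifacts produced by the two pipeline commands above.
    engine = create_engine('sqlite:///data/DisasterResponse.db')
    df = pd.read_sql_table('DisasterResponse', engine)
    with open('models/classifier.pkl', 'rb') as f:
        model = pickle.load(f)

    @app.route('/go')
    def go():
        # Predict one label per category column for the submitted message.
        query = request.args.get('query', '')
        labels = model.predict([query])[0].tolist()
        return jsonify(dict(zip(df.columns[4:], labels)))  # category columns assumed to start at index 4

    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=3001)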

Additional Material

In the data and models folders you can find two Jupyter notebooks that will help you understand how the model works, step by step:

  1. ETL Preparation Notebook: learn everything about the implemented ETL pipeline
  2. ML Pipeline Preparation Notebook: look at the Machine Learning Pipeline developed with NLTK and Scikit-Learn
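
The core of that notebook (and of train_classifier.py) is an NLTK tokenizer feeding a scikit-learn Pipeline wrapped in a multi-output classifier. A condensed sketch; the exact tokenizer and the RandomForest estimator are assumptions about this implementation:

    # Condensed ML pipeline sketch: NLTK tokenization into TF-IDF features,
    # then one classifier per category via MultiOutputClassifier.
    import pickle
    import re

    import pandas as pd
    from nltk.stem import WordNetLemmatizer
    from nltk.tokenize import word_tokenize
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
    from sklearn.model_selection import train_test_split
    from sklearn.multioutput import MultiOutputClassifier
    from sklearn.pipeline import Pipeline
    from sqlalchemy import create_engine

    def tokenize(text):
        """Normalize, tokenize and lemmatize a raw message."""
        text = re.sub(r'[^a-zA-Z0-9]', ' ', text.lower())
        lemmatizer = WordNetLemmatizer()
        return [lemmatizer.lemmatize(tok) for tok in word_tokenize(text)]

    engine = create_engine('sqlite:///data/DisasterResponse.db')
    df = pd.read_sql_table('DisasterResponse', engine)
    X, Y = df['message'], df.iloc[:, 4:]  # text in, one 0/1 column per category out

    pipeline = Pipeline([
        ('vect', CountVectorizer(tokenizer=tokenize)),
        ('tfidf', TfidfTransformer()),
        ('clf', MultiOutputClassifier(RandomForestClassifier())),
    ])

    X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)
    pipeline.fit(X_train, Y_train)

    with open('models/classifier.pkl', 'wb') as f:
        pickle.dump(pipeline, f)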

You can use the ML Pipeline Preparation Notebook to re-train the model or tune it through a dedicated Grid Search section. In that case, it is strongly recommended to run Grid Search on a Linux machine, especially if you are going to try a large grid of parameter combinations: on a standard desktop/laptop (4 CPUs, 8 GB of RAM or more) it may take several hours to complete.
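
That Grid Search section boils down to a GridSearchCV over the pipeline's hyperparameters, which is where the long runtimes come from. A sketch continuing from the pipeline snippet above (pipeline, X_train and Y_train are defined there); the parameter grid is illustrative, not the notebook's actual grid:

    # Illustrative Grid Search over the pipeline; every parameter combination
    # is cross-validated, so even a small grid multiplies training time.
    from sklearn.model_selection import GridSearchCV

    param_grid = {
        'vect__ngram_range': [(1, 1), (1, 2)],
        'tfidf__use_idf': [True, False],
        'clf__estimator__n_estimators': [50, 100],
    }

    # n_jobs=-1 parallelizes across all cores, hence the Linux recommendation.
    cv = GridSearchCV(pipeline, param_grid=param_grid, cv=3, n_jobs=-1, verbose=2)
    cv.fit(X_train, Y_train)
    print(cv.best_params_)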

Authors

License

License: MIT

Acknowledgements

  • Udacity for providing such a complete Data Science Nanodegree Program
  • Figure Eight for providing the messages dataset used to train the model

Screenshots

  1. This is an example of a message you can type to test the Machine Learning model's performance:

[Sample Input screenshot]

  2. After clicking Classify Message, you can see the categories the message belongs to highlighted in green:

[Sample Output screenshot]

  3. The main page shows some graphs about the training dataset, provided by Figure Eight:

[Main Page screenshot]
