Labs DS starter

This template has starter code to deploy an API for your machine learning model and data visualizations.

You can see the template deployed on AWS as-is.

There are two different ways to use Python web frameworks, and both are good! The first is what you learned in DS Unit 3, with Flask. The second is more common in Build Weeks & Labs.

Instead of Flask, we'll use FastAPI. It's similar, but faster, with automatic interactive docs. For more comparison, see FastAPI for Flask Users.
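
For a taste of the similarity, here's a minimal, hypothetical FastAPI app (not part of this template; the route and return value are just illustrations):

from fastapi import FastAPI

app = FastAPI()

@app.get('/')
async def root():
    """Like a Flask view function, but interactive docs are generated automatically at /docs."""
    return {'hello': 'world'}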

You'll build and deploy a Data Science API. You'll work cross-functionally with your Web teammates to connect your API to a full-stack web app!

Tech stack

  • AWS Elastic Beanstalk: Platform as a service, hosts your API.
  • Docker: Containers, for reproducible environments.
  • FastAPI: Web framework. Like Flask, but faster, with automatic interactive docs.
  • Flake8: Linter, enforces PEP8 style guide.
  • Plotly: Visualization library, for Python & JavaScript.
  • Pytest: Testing framework, runs your unit tests.

Getting started

Create a new repository from this template.

Clone the repo

git clone https://github.com/YOUR-GITHUB-USERNAME/YOUR-REPO-NAME.git

cd YOUR-REPO-NAME

Build the Docker image

docker-compose build

Run the Docker image

docker-compose up


You'll see your API documentation:

  • Your app's title, "DS API"
  • Your description, "Lorem ipsum"
  • An endpoint for POST requests, /predict
  • An endpoint for GET requests, /viz/{statecode}

Click the /predict endpoint's green button.


You'll see the endpoint's documentation, including:

  • Your function's docstring, """Make random baseline predictions for classification problem."""
  • Request body example, as JSON (like a Python dictionary)
  • A button, "Try it out"

Click the "Try it out" button.


The request body becomes editable.

Click the "Execute" button. Then scroll down.


You'll see the server response, including:

  • Code 200, which means the request was successful.
  • The response body, as JSON, with random baseline predictions for a classification problem.

Your job is to replace these random predictions with real predictions from your model. Use this starter code and documentation to deploy your model as an API!
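
Your teammates don't need the browser docs to call the endpoint. Here's a hedged sketch using the requests library, assuming the API runs locally on port 8000; the field names in the body are made up, so match them to your Item class:

import requests

# Hypothetical URL and body; adjust the host, port, and fields for your app
response = requests.post(
    'http://localhost:8000/predict',
    json={'x1': 3.14, 'x2': -42, 'x3': 'banana split'},
)
print(response.status_code)  # 200 means success
print(response.json())       # the prediction, as JSON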

File structure

project
└── app
    ├── __init__.py
    ├── main.py
    ├── api
    │   ├── __init__.py
    │   ├── predict.py
    │   └── viz.py    
    └── tests
        ├── __init__.py
        ├── test_main.py
        ├── test_predict.py
        └── test_viz.py

app/main.py is where you edit your app's title and description, which are displayed at the top of your automatically generated documentation. This file also configures Cross-Origin Resource Sharing (CORS), which you shouldn't need to edit.
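
For reference, the relevant parts of app/main.py look roughly like this sketch; the permissive CORS settings are illustrative, not a recommendation for production:

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI(
    title='DS API',
    description='Lorem ipsum',
    version='0.1',
    docs_url='/',
)

# Allow your web teammates' front end to call the API from another origin
app.add_middleware(
    CORSMiddleware,
    allow_origins=['*'],
    allow_methods=['*'],
    allow_headers=['*'],
)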

app/api/predict.py defines the Machine Learning endpoint. /predict accepts POST requests and responds with random predictions. In a notebook, train your model and pickle it. Then in this source code file, unpickle your model and edit the predict function to return real predictions.

When your API receives a POST request, FastAPI automatically parses and validates the request body JSON, using the Item class attributes and functions. Edit this class so it's consistent with the column names and types from your training dataframe.

app/api/viz.py defines the Visualization endpoint. Currently /viz/{statecode} accepts GET requests, where {statecode} is a 2-character US state postal code, and responds with a Plotly figure of that state's unemployment rate, as a JSON string. Create your own Plotly visualizations in notebooks, then add your code to this source file. Your web developer teammates can use react-plotly.js to render the visualizations.
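
As a sketch of the pattern, an endpoint can build a figure and return it serialized with Plotly's to_json. The endpoint shape matches the description above, but the figure and numbers here are made up:

from fastapi import APIRouter
import plotly.graph_objects as go

router = APIRouter()

@router.get('/viz/{statecode}')
async def viz(statecode: str):
    """Return a Plotly figure as a JSON string. The data points are placeholders."""
    fig = go.Figure(go.Scatter(
        x=[2018, 2019, 2020],
        y=[4.0, 3.7, 8.1],
        name=f'{statecode} unemployment rate',
    ))
    return fig.to_json()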


The app/tests/test_*.py files are where you edit your pytest unit tests.
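
For example, a minimal smoke test using FastAPI's TestClient, assuming docs_url='/' as in this template:

from fastapi.testclient import TestClient
from app.main import app

client = TestClient(app)

def test_docs_page():
    """The root URL should serve the auto-generated docs."""
    response = client.get('/')
    assert response.status_code == 200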

Deploy to AWS

Get your AWS access keys.

Install AWS Command Line Interface.

Configure AWS CLI:

aws configure

Install AWS Elastic Beanstalk CLI:

pip install pipx
pipx install awsebcli

Follow the AWS Elastic Beanstalk docs:

Use Docker to build the image locally, test it locally, then push it to Docker Hub.

docker build -f project/Dockerfile -t YOUR-DOCKER-HUB-ID/YOUR-IMAGE-NAME ./project

docker login

docker push YOUR-DOCKER-HUB-ID/YOUR-IMAGE-NAME

Edit the image name in the Dockerrun.aws.json file. Replace the placeholder YOUR-DOCKER-HUB-ID/YOUR-IMAGE-NAME with your real values.
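
For reference, a minimal sketch of the version-1 single-container format; the container port is an assumption and must match the port your Dockerfile exposes:

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "YOUR-DOCKER-HUB-ID/YOUR-IMAGE-NAME",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8000"
    }
  ]
}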

Use the EB CLI:

git add --all

git commit -m "Your commit message"

eb init -p docker YOUR-APP-NAME --region us-east-1

eb create YOUR-APP-NAME

eb open

To redeploy:

  • git commit ...
  • docker build ...
  • docker push ...
  • eb deploy
  • eb open

Machine learning, step-by-step

Follow the getting started instructions.

Edit app/main.py to add your API title and description.

app = FastAPI(
    title='House Price DS API',
    description='Predict house prices in California',
    version='0.1',
    docs_url='/',
)

Edit app/api/predict.py to add a docstring for your predict function and return a naive baseline.

@router.post('/predict')
async def predict(item: Item):
    """Predict house prices in California."""
    y_pred = 200000
    return {'predicted_price': y_pred}

In a notebook, explore your data. Make an educated guess of what features you'll use.

import pandas as pd
from sklearn.datasets import fetch_california_housing

# Load data
california = fetch_california_housing()
print(california.DESCR)
X = pd.DataFrame(california.data, columns=california.feature_names)
y = california.target

# Rename columns
X.columns = X.columns.str.lower()
X = X.rename(columns={
    'avebedrms': 'bedrooms',
    'averooms': 'total_rooms',
    'houseage': 'house_age',
})

# Explore descriptive stats
X.describe()

# Use these 3 features
features = ['bedrooms', 'total_rooms', 'house_age']

Edit the class in app/api/predict.py to use your features.

class House(BaseModel):
    """Use this data model to parse the request body JSON."""
    bedrooms: int
    total_rooms: float
    house_age: float

    def to_df(self):
        """Convert pydantic object to pandas dataframe with 1 row."""
        return pd.DataFrame([dict(self)])

@router.post('/predict')
async def predict(house: House):
    """Predict house prices in California."""
    ...

Deploy your work-in-progress to AWS. Now your web teammates can make POST requests to your API endpoint.

In a notebook, train your pipeline and pickle it. See the scikit-learn docs on pipelines and model persistence.
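
A minimal sketch, assuming the California housing X, y, and features from the notebook code above; the model choice here is just an example:

import pickle
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Fit a simple pipeline on the features chosen earlier
pipeline = make_pipeline(StandardScaler(), LinearRegression())
pipeline.fit(X[features], y)

# Serialize the fitted pipeline to a file
with open('pipeline.pkl', 'wb') as f:
    pickle.dump(pipeline, f)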

Get version numbers for every package you used in your pipeline. Install the exact versions of these packages in your virtual environment.
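
One quick way to check, in the same notebook:

import pandas as pd
import sklearn

print('pandas', pd.__version__)
print('scikit-learn', sklearn.__version__)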

Edit app/api/predict.py to unpickle your model and use it in your predict function.
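
A hedged sketch of the edited file, reusing the House class defined above; the pickle filename is an assumption, and the cast to float is there so the prediction serializes cleanly as JSON:

import pickle
from fastapi import APIRouter

router = APIRouter()

# Load the trained pipeline once, at import time
with open('pipeline.pkl', 'rb') as f:
    pipeline = pickle.load(f)

@router.post('/predict')
async def predict(house: House):
    """Predict house prices in California with the trained pipeline."""
    X_new = house.to_df()
    y_pred = pipeline.predict(X_new)[0]
    return {'predicted_price': float(y_pred)}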

Now you are ready to re-deploy!

License

MIT License

