
MLOPS PROJECT (End to End)

This is a project for the MLOps ZoomCamp course sponsored by DataTalks.Club.

Problem

This is a simple end-to-end MLOps project that takes data from Capital Bikeshare and runs it through machine learning pipelines: model training, tracking, and experimentation with MLflow; orchestration with Prefect as the workflow tool; and deployment of the model as a web service.

The project runs locally and uses an AWS S3 bucket to store model artifacts during model tracking and experimentation with MLflow.

Dataset

The dataset chosen for this project is the Capital Bikeshare trip data.

Improvements

In the future I hope to improve the project by moving the entire infrastructure to the cloud on AWS (managed with IaC tools such as Terraform), deploying the model as either a batch or streaming service with AWS Lambda and Kinesis streams, and adding comprehensive model monitoring.

Project Setup

Clone the project from the repository

git clone https://github.com/PatrickCmd/mlops-project.git

Change to the mlops-project directory

cd mlops-project

Set up and install the project dependencies

make setup

Add your current directory to the Python path

export PYTHONPATH="${PYTHONPATH}:${PWD}"

Start Local Prefect Server

In a new terminal window or tab, run the command below to start the Prefect Orion server

prefect orion start

Start Local MLflow Server

MLflow points to an S3 bucket for storing model artifacts and uses a SQLite database as the backend store.

Create an S3 bucket and export the bucket name as an environment variable as shown below.

In a new terminal window or tab, run the following command (replacing bucket_name with your bucket's name)

export S3_BUCKET_NAME=bucket_name
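
If the bucket does not exist yet, one way to create it (assuming the AWS CLI is installed and configured with credentials) is:

aws s3 mb s3://${S3_BUCKET_NAME}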

Start the MLflow server

mlflow server --backend-store-uri sqlite:///mlflow.db --default-artifact-root s3://${S3_BUCKET_NAME} --artifacts-destination s3://${S3_BUCKET_NAME}
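
Once the server is up, training code can log runs to it. A minimal sketch, where the experiment name and logged values are illustrative rather than the project's actual ones:

import mlflow

# Point the MLflow client at the local tracking server started above
mlflow.set_tracking_uri("http://127.0.0.1:5000")
mlflow.set_experiment("capitalbikeshare-experiment")  # hypothetical experiment name

with mlflow.start_run():
    mlflow.log_param("model", "baseline")  # example parameter
    mlflow.log_metric("rmse", 6.2)         # example metric value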

Running the model training and model registry staging pipelines locally

Model training

python main.py --train_file 202204-capitalbikeshare-tripdata.zip --valid_file 202205-capitalbikeshare-tripdata.zip
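
The contents of main.py aren't shown here, but a rough sketch of the shape such a Prefect training flow might take (the file handling and logged values are illustrative, not the project's actual code):

from prefect import flow

import mlflow
import pandas as pd

@flow
def main(train_file: str, valid_file: str):
    mlflow.set_tracking_uri("http://127.0.0.1:5000")
    # pandas can read the zipped trip-data CSVs directly
    train_df = pd.read_csv(train_file)
    valid_df = pd.read_csv(valid_file)
    with mlflow.start_run():
        # feature engineering, model fitting, and metric/artifact
        # logging against the tracking server would go here
        mlflow.log_param("train_file", train_file)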

[Screenshot: model tracking in the MLflow UI]

[Screenshot: flow runs in the Prefect UI]

Register and Stage model

python stage.py --tracking_uri http://127.0.0.1:5000 --experiment_name valid_experiment_name
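
Under the hood, registering and staging a model with MLflow typically looks like the sketch below; the run id is a placeholder and the registry name is hypothetical, so this is not necessarily what stage.py does:

import mlflow
from mlflow.tracking import MlflowClient

mlflow.set_tracking_uri("http://127.0.0.1:5000")
client = MlflowClient()

# <run_id> is a placeholder for the best run chosen from the experiment
model_uri = "runs:/<run_id>/model"
mv = mlflow.register_model(model_uri, "bikeshare-model")  # hypothetical registry name

# Move the newly registered version into the Staging stage
client.transition_model_version_stage(
    name="bikeshare-model",
    version=mv.version,
    stage="Staging",
)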

[Screenshot: registered model]

Create scheduled deployments, and the agent workers that will run them

prefect deployment create deployments.py
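
The contents of deployments.py aren't shown here. In the early Prefect 2 (Orion) beta that this command comes from, a deployments file would look roughly like the sketch below; the flow locations and schedules are assumptions:

from prefect.deployments import DeploymentSpec
from prefect.orion.schemas.schedules import CronSchedule

DeploymentSpec(
    flow_location="main.py",  # assumed path to the training flow
    name="deploy-mlflow-training",
    schedule=CronSchedule(cron="0 3 * * *"),  # illustrative schedule
    tags=["ml-training"],
)

DeploymentSpec(
    flow_location="stage.py",  # assumed path to the staging flow
    name="deploy-mlflow-staging",
    schedule=CronSchedule(cron="0 4 * * *"),  # illustrative schedule
    tags=["ml-staging"],
)

Note that later Prefect 2 releases replaced DeploymentSpec with prefect deployment build, so the exact form depends on the pinned Prefect version.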

[Screenshot: deployments]

Create work queues

prefect work-queue create -t "ml-training" ml-training-queue
prefect work-queue create -t "ml-staging" ml-staging-queue
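
The deployments won't actually execute until an agent is polling the matching work queue. In a separate terminal for each queue, start an agent; depending on the Prefect version, prefect agent start takes either the queue name or the queue ID printed by the create command above:

prefect agent start ml-training-queue
prefect agent start ml-staging-queue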

[Screenshots: work queue creation, training queue, and staging queue]

Run deployments locally to schedule pipeline flows

prefect deployment run mlflow-training/deploy-mlflow-training
prefect deployment run mlflow-staging/deploy-mlflow-staging

[Screenshot: scheduled flow runs]

Deploy model as a web service locally

Change to the webservice directory and follow the instructions there.
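
Once the web service is running, a prediction request might look like the sketch below; the endpoint, port, and payload fields are hypothetical and depend on the webservice implementation:

import requests

# hypothetical feature payload; the real fields depend on the webservice code
ride = {
    "start_station_id": "31203",
    "end_station_id": "31229",
    "rideable_type": "classic_bike",
}

# assumed local endpoint and port exposed by the web service
response = requests.post("http://127.0.0.1:9696/predict", json=ride)
print(response.json())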
