dnguyenngoc / ml-models-in-production

This repo gives an introduction to building a full working example that serves your model using asynchronous Celery tasks and FastAPI. 🔥 🔥 🔥 🔥

Home Page: https://viblo.asia/p/serving-ml-models-in-production-with-fastapi-and-celery-924lJROmlPM

ML Models in Production

Python · Celery · RabbitMQ · Redis · React · FastAPI · NPM · TensorFlow

This post walks through a full working example of serving an ML model using asynchronous Celery tasks and FastAPI; all of the code can be found in this repository. We won't discuss the ML model itself in detail: it was trained with TensorFlow on the COCO dataset, which covers 80 object classes such as cat, dog, and bird (see the COCO Dataset for more). A minimal sketch of the serving pattern is shown below.
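To make the pattern concrete, here is a minimal sketch of a Celery worker task plus the FastAPI endpoints that enqueue it and poll for its result. This is not the repo's actual code: the module layout, task name, endpoint paths, and the inference stub are illustrative assumptions; the broker and backend URLs simply follow the stack listed above (RabbitMQ as broker, Redis as result backend).

```python
# Minimal sketch (not the repo's actual code): a Celery task plus
# FastAPI endpoints that enqueue it and poll for the result.
import base64

from celery import Celery
from celery.result import AsyncResult
from fastapi import FastAPI, File, UploadFile

# --- worker side ------------------------------------------------------
# Broker/backend URLs assume RabbitMQ and Redis service names; adjust
# to match your docker-compose configuration.
celery_app = Celery(
    "ml_tasks",
    broker="amqp://guest:guest@rabbitmq:5672//",
    backend="redis://redis:6379/0",
)

@celery_app.task(name="predict_image")
def predict_image(image_b64: str) -> dict:
    """Placeholder for real TensorFlow inference on a COCO-trained model."""
    image_bytes = base64.b64decode(image_b64)
    # ... load the model and run object detection on image_bytes ...
    return {"detections": [], "bytes_received": len(image_bytes)}

# --- API side ---------------------------------------------------------
app = FastAPI()

@app.post("/api/predict")
async def create_prediction(file: UploadFile = File(...)):
    # Celery's default JSON serializer cannot carry raw bytes, so the
    # image is base64-encoded before being sent to the queue.
    payload = base64.b64encode(await file.read()).decode("ascii")
    task = celery_app.send_task("predict_image", args=[payload])
    return {"task_id": task.id}

@app.get("/api/result/{task_id}")
def get_result(task_id: str):
    result = AsyncResult(task_id, app=celery_app)
    if not result.ready():
        return {"status": result.state}  # e.g. PENDING / STARTED
    return {"status": result.state, "result": result.get()}
```

The point of the split is that the HTTP request returns immediately with a task id while the (slow) model inference runs on a separate Celery worker; the client fetches the result once the task finishes.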

Contents

Screenshots & GIFs

View System

Architecture

Demo

1. Install Docker and Docker Compose

https://www.docker.com/

2. Clone the git repo

git clone https://github.com/apot-group/ml-models-in-production.git

3. Start Server

cd ml-models-in-production && docker-compose up

Service URLs

API docs: http://localhost/api/docs
Demo Web: http://localhost

Go to the demo web app at http://localhost and test it with your own picture, or use the scripted check sketched below.
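For a quick check without the browser, something like the following works, assuming endpoints shaped like the sketch above; the repo's actual routes are listed at http://localhost/api/docs.

```python
# Hypothetical client check; the endpoint paths follow the sketch above,
# not necessarily the repo's actual routes (see /api/docs for those).
import time

import requests

with open("cat.jpg", "rb") as f:
    resp = requests.post(
        "http://localhost/api/predict",
        files={"file": ("cat.jpg", f, "image/jpeg")},
    )
task_id = resp.json()["task_id"]

# Poll until the Celery worker finishes the detection task.
while True:
    status = requests.get(f"http://localhost/api/result/{task_id}").json()
    if status["status"] in ("SUCCESS", "FAILURE"):
        print(status)
        break
    time.sleep(1)
```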

Test

Contact Us


Languages

Python 66.2% · JavaScript 28.2% · SCSS 3.7% · Dockerfile 0.9% · HTML 0.7% · CSS 0.2% · Shell 0.2%