christophe-rannou / sample_vision_api_yolov4

Vision API with YOLOv4 model and Flask


Sample Vision API ONNX runtime - Docker image

This project provides a simple Flask API that annotates objects in images with the YOLOv4 model, leveraging onnxruntime. A Dockerfile is also provided to build an image of this API.

Requirements

  • Python 3.7 installed, or Docker
  • Download the YOLOv4 ONNX model (linked in the References section below) and add it to the resources directory

Get started

Start the API

Create a Python 3.7 environment and install the API requirements. Then move to the src directory and start the API with:

python api.py
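
For reference, setting up the environment might look like this (a sketch assuming the dependencies are listed in a requirements.txt at the project root, which is an assumption about the repository layout):

python3.7 -m venv venv
source venv/bin/activate
pip install -r requirements.txt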

Build the Docker image

Alternatively, you can build the Docker image:

docker build -t sample-vision-api .

Run the Docker image

Start the image and forward port 8080:

docker run --rm -p 8080:8080 sample-vision-api:latest

Use the API

Annotate objects in an image

To use the API:

curl -X POST "http://localhost:8080" -F "file=@kite.jpeg" --output annotated_kite.jpeg
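
Equivalently, from Python with the requests library (a minimal sketch; assumes requests is installed and a kite.jpeg file sits in the working directory):

import requests

# Send the image as multipart form data and save the annotated result
with open("kite.jpeg", "rb") as f:
    response = requests.post("http://localhost:8080", files={"file": f})
response.raise_for_status()
with open("annotated_kite.jpeg", "wb") as out:
    out.write(response.content)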

Deploy the API on AI Training

AI Training allows you to deploy Docker images on a managed infrastructure tailored for AI needs. Learn more about AI Training in its documentation.

To deploy your API on AI Training with the CLI, you first need to build the Docker image. Once the image is built, push it to a remote registry: you can either create a free account on DockerHub or use a registry provided with AI Training. Let's use the provided registry. From this step forward it is assumed you are logged in with the CLI (ovhai login).

First get the registry URL:

> ovhai registry list

ID TYPE   URL
   SHARED registry.gra.training.ai.cloud.ovh.net/<some-id>

Then login to the registry:

docker login registry.gra.training.ai.cloud.ovh.net/<some-id>

You will be prompted for credentials; use the same ones you used to log in with the CLI.

Once logged in, retag the locally built image and push it:

docker tag sample-vision-api:latest registry.gra.training.ai.cloud.ovh.net/<some-id>/sample-vision-api:latest
docker push registry.gra.training.ai.cloud.ovh.net/<some-id>/sample-vision-api:latest

Finally, just run the image:

ovhai job run --cpu 4 registry.gra.training.ai.cloud.ovh.net/<some-id>/sample-vision-api:latest

You can list your jobs with ovhai job list. Wait for the job you just submitted to reach the RUNNING state. Once it is running, a URL is provided to access the API:

> ovhai job list

ID       STATE   AGE IMAGE                                                           JOB_URL
<job-id> RUNNING 3s  registry.gra.training.ai.cloud.ovh.net/<some-id>/sample-vision-api https://<job-id>.job.gra.training.ai.cloud.ovh.net

By default, access to the JOB_URL is restricted; you can access it with an application token. Create an application token:

> ovhai token create --role read mytoken
---
appToken: <app-token>
token:
  id: <some-id>
  createdAt: "2021-04-22T15:53:49.993112679Z"
  updatedAt: "2021-04-22T15:53:49.993112679Z"
  name: mytoken
  labelSelector: ""
  version: 1

The token we just created is unscoped and valid for any job. To learn more about application tokens and how to scope them, refer to the documentation.

With the new token you can now call the vision API:

export APP_TOKEN=<app-token>
curl -X POST "https://<job-id>.job.gra.training.ai.cloud.ovh.net" -F "file=@kite.jpeg" --output anotated_kite.jpeg -H "Authorization: Bearer $APP_TOKEN"

Input: kite.jpeg

Output: annotated_kite.jpeg

Adapt the API

There are two main files in the project:

  • api.py: a basic Flask API that reads an image file from the request and returns the annotated image as a result (see the sketch after this list)
  • inference.py: the inference code, including the preprocessing and postprocessing steps
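
For orientation, the Flask layer is essentially of this shape (a minimal sketch, not the file's exact contents; the model path and route are assumptions):

from flask import Flask, request, send_file
import io
import onnxruntime as ort

from inference import infer

app = Flask(__name__)
# Load the ONNX model once at startup (the path is an assumption)
session = ort.InferenceSession("../resources/yolov4.onnx")

@app.route("/", methods=["POST"])
def annotate():
    # Read the uploaded file and run it through the inference code
    image_stream = request.files["file"]
    annotated_bytes = infer(image_stream, session)
    return send_file(io.BytesIO(annotated_bytes), mimetype="image/jpeg")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)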

To build your own vision API, you simply need to redefine the infer function in inference.py. This function takes the image input stream from the request, which needs to be decoded, and the ONNX Runtime session with the trained model loaded.
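
A minimal sketch of a custom infer (the input size, tensor layout, and output decoding are assumptions that must match your own model; the postprocessing is left as a placeholder):

import cv2
import numpy as np

def infer(image_stream, session):
    # Decode the raw request bytes into a BGR image
    data = np.frombuffer(image_stream.read(), dtype=np.uint8)
    image = cv2.imdecode(data, cv2.IMREAD_COLOR)

    # Preprocess: resize and normalize to the model's expected input
    # (416x416 NHWC float32 is an assumption; match your model)
    blob = cv2.resize(image, (416, 416)).astype(np.float32) / 255.0
    blob = np.expand_dims(blob, axis=0)

    # Run inference; input names depend on the exported model
    input_name = session.get_inputs()[0].name
    outputs = session.run(None, {input_name: blob})

    # Postprocessing is model-specific: decode `outputs` into boxes and
    # labels, then draw them on `image` with e.g. cv2.rectangle/cv2.putText.

    # Re-encode the (annotated) image as JPEG bytes for the HTTP response
    _, encoded = cv2.imencode(".jpeg", image)
    return encoded.tobytes()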

To create additional routes, refer to the Flask documentation.

Publication/Attribution

References

The ONNX model, along with the inference code, is directly extracted from ONNX models vision YOLOv4.

Contributors

Christophe Rannou

License

MIT License
