cortexlabs / cortex

Deploy machine learning models in production

Home page: https://cortex.dev



install | docs | examples | we're hiring | email us | chat with us


Cortex is a machine learning deployment platform that you can self-host on AWS. It combines TensorFlow Serving, ONNX Runtime, and Flask into a single tool that takes models from S3 and deploys them as JSON prediction APIs. It also uses Docker and Kubernetes behind the scenes to autoscale, run rolling updates, and support CPU and GPU inference.


How it works

Define your deployment using declarative configuration:

# cortex.yaml

- kind: api
  name: my-api
  model: s3://my-bucket/my-model.onnx
  request_handler: handler.py
  compute:
    gpu: 1
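The configuration is declarative: each entry describes a desired API rather than a sequence of deployment steps. As an illustration only (the field names come from the cortex.yaml above, but the validation rules here are assumptions, not Cortex's actual schema), a loader for such a definition might check the required keys like this:

```python
# Illustrative sketch: "kind", "name", and "model" are taken from the
# cortex.yaml above; the checks below are assumptions, not Cortex's schema.
REQUIRED_KEYS = {"kind", "name", "model"}


def validate_api(config):
    """Check that an API definition has the keys a deployment needs."""
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if not config["model"].startswith("s3://"):
        raise ValueError("model must be an s3:// path")
    return True
```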

Customize request handling:

# handler.py

# Load data for preprocessing or postprocessing. For example:
labels = download_labels_from_s3()


def pre_inference(sample, metadata):
  # Transform the raw JSON sample before it is passed to the model
  return sample


def post_inference(prediction, metadata):
  # Transform the model's prediction before it is returned to the client
  return prediction
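To make the skeleton concrete, here is one hedged way the two handlers could be filled in for a classifier. The feature ordering, label list, and response shape are illustrative assumptions, not part of the Cortex API:

```python
# Hypothetical label list; in practice this might be downloaded from S3,
# as in the handler.py skeleton above.
labels = ["abc", "def", "ghi"]


def pre_inference(sample, metadata):
    # Flatten the JSON sample into an ordered feature vector for the model.
    return {"input": [float(sample[k]) for k in sorted(sample)]}


def post_inference(prediction, metadata):
    # Map the highest-scoring class index back to a human-readable label.
    idx = max(range(len(prediction)), key=lambda i: prediction[i])
    return {"prediction": labels[idx]}
```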

Deploy to AWS:

$ cortex deploy

Deploying ...
http://***.amazonaws.com/my-api  # Your API is ready!

Serve real-time predictions via autoscaling JSON APIs:

$ curl http://***.amazonaws.com/my-api -d '{"a": 1, "b": 2, "c": 3}'

{"prediction": "def"}
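The same request can be made from Python with nothing but the standard library. A sketch, with a placeholder URL standing in for the elided hostname above:

```python
import json
import urllib.request


def build_prediction_request(api_url, sample):
    # Build a POST request whose body is the JSON-encoded sample.
    return urllib.request.Request(
        api_url,
        data=json.dumps(sample).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


def predict(api_url, sample):
    # Send the request and parse the JSON prediction from the response.
    request = build_prediction_request(api_url, sample)
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))
```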

Hosting Cortex on AWS

# Download the install script
$ curl -O https://raw.githubusercontent.com/cortexlabs/cortex/0.7/cortex.sh && chmod +x cortex.sh

# Install the Cortex CLI on your machine
$ ./cortex.sh install cli

# Set your AWS credentials
$ export AWS_ACCESS_KEY_ID=***
$ export AWS_SECRET_ACCESS_KEY=***

# Configure AWS instance settings
$ export CORTEX_NODE_TYPE="p2.xlarge"
$ export CORTEX_NODES_MIN="1"
$ export CORTEX_NODES_MAX="3"

# Provision infrastructure on AWS and install Cortex
$ ./cortex.sh install

Key features

  • Minimal declarative configuration: Deployments can be defined in a single cortex.yaml file.

  • Autoscaling: Cortex can automatically scale APIs to handle production workloads.

  • Multi-framework: Cortex supports TensorFlow, Keras, PyTorch, Scikit-learn, XGBoost, and more.

  • Rolling updates: Cortex updates deployed APIs without any downtime.

  • Log streaming: Cortex streams logs from your deployed models to your CLI.

  • Prediction monitoring: Cortex can monitor network metrics and track predictions.

  • CPU / GPU support: Cortex can run inference on CPU or GPU infrastructure.


Examples

About


License: Apache License 2.0


Languages

Go 82.0% · Python 9.7% · Shell 6.8% · Dockerfile 0.9% · Makefile 0.6%