cortex

Deploy machine learning models to production

Home Page: https://cortex.dev

install · documentation · examples · community

Cortex is an open source platform for deploying, managing, and scaling machine learning in production.


Model serving infrastructure

  • Supports deploying TensorFlow, PyTorch, scikit-learn, and other models as realtime or batch APIs (a batch spec sketch follows this list).
  • Ensures high availability with availability zones and automated instance restarts.
  • Runs inference on spot instances with on-demand backups.
  • Autoscales to handle production workloads.
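
For batch workloads, the API spec uses a different kind. A minimal sketch, assuming the same Python-dict spec format as the RealtimeAPI example shown later in this README (the API name and compute values here are illustrative):

batch_api_spec = {
  "name": "text-summarizer",  # hypothetical name
  "kind": "BatchAPI",
  "predictor": {
    "type": "python",
    "path": "predictor.py"
  },
  "compute": {
    "cpu": 2,
    "mem": "4Gi"
  }
}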

Configure Cortex

# cluster.yaml

region: us-east-1
instance_type: g4dn.xlarge  # GPU instances for inference
min_instances: 10
max_instances: 100
spot: true  # run workers on spot instances, with on-demand fallback

Spin up Cortex on your AWS account

$ cortex cluster up --config cluster.yaml

○ configuring autoscaling ✓
○ configuring networking ✓
○ configuring logging ✓

cortex is ready!

Reproducible deployments

  • Package dependencies, code, and configuration for reproducible deployments.
  • Configure compute, autoscaling, and networking for each API.
  • Integrate with your data science platform or CI/CD system.
  • Test locally before deploying to your cluster (see the sketch after this list).
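
A minimal local-testing sketch. The "local" environment name is an assumption based on Cortex releases of this era, which shipped a local environment for testing APIs before deploying to a cluster; check your configured environments if it differs:

import cortex

# "local" is an assumed environment name for pre-deployment testing
cx_local = cortex.client("local")
cx_local.deploy(api_spec, project_dir=".")  # api_spec as defined below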

Implement a predictor

# predictor.py

from transformers import pipeline

class PythonPredictor:
  def __init__(self, config):
    # the model is downloaded once per replica at startup
    # (the text-generation pipeline defaults to GPT-2)
    self.model = pipeline(task="text-generation")

  def predict(self, payload):
    # payload is the parsed JSON request body
    return self.model(payload["text"])[0]
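
Because the predictor is plain Python, it can be exercised directly before any deployment; a minimal smoke test (the empty config dict stands in for an API spec with no predictor config):

from predictor import PythonPredictor

predictor = PythonPredictor(config={})
print(predictor.predict({"text": "hello world"}))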

Configure an API

api_spec = {
  "name": "text-generator",
  "kind": "RealtimeAPI",
  "predictor": {
    "type": "python",
    "path": "predictor.py"
  },
  "compute": {
    "gpu": 1,
    "mem": "8Gi",
  },
  "autoscaling": {
    "min_replicas": 1,
    "max_replicas": 10
  },
  "networking": {
    "api_gateway": "public"
  }
}
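
The autoscaling block accepts more knobs than replica bounds; a sketch assuming the target_replica_concurrency field from the Cortex autoscaling docs of this era (the value shown is illustrative):

autoscaling_spec = {
  "min_replicas": 1,
  "max_replicas": 10,
  # scale out when average in-flight requests per replica exceed this target
  "target_replica_concurrency": 1
}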

Scalable machine learning APIs

  • Scale to handle production workloads with request-based autoscaling.
  • Stream performance metrics and logs to any monitoring tool.
  • Serve many models efficiently with multi-model caching.
  • Configure traffic splitting for A/B testing (see the sketch after this list).
  • Update APIs without downtime.
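
A traffic-splitting sketch for A/B testing, assuming the TrafficSplitter kind and weight-based routing described in Cortex docs of this era (the splitter name and the second API are hypothetical):

splitter_spec = {
  "name": "text-generator-ab",
  "kind": "TrafficSplitter",
  "apis": [
    {"name": "text-generator", "weight": 90},
    {"name": "text-generator-v2", "weight": 10}  # hypothetical candidate API
  ]
}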

Deploy to your cluster

import cortex

cx = cortex.client("aws")
cx.deploy(api_spec, project_dir=".")

# creating https://example.com/text-generator
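
Once deployed, the API's status and endpoint can be inspected from the same client; a sketch assuming a get_api method as in the Python client docs of this era (verify against your client version):

api = cx.get_api("text-generator")  # assumed client method
print(api)  # status, endpoint, replica counts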

Consume your API

import requests

endpoint = "https://example.com/text-generator"
payload = {"text": "hello world"}
response = requests.post(endpoint, json=payload)  # send the body as JSON, not form-encoded
prediction = response.json()

Get started

$ pip install cortex

See the installation guide for next steps.

About

License: Apache License 2.0
