cortexlabs / cortex

Deploy machine learning in production

Home Page: https://cortex.dev



install | documentation | examples | we're hiring | chat with us


Model serving at scale

Deploy

  • Deploy TensorFlow, PyTorch, ONNX, scikit-learn, and other models.
  • Define preprocessing and postprocessing steps in Python.
  • Configure APIs as realtime or batch.
  • Deploy multiple models per API.

Manage

  • Monitor API performance and track predictions.
  • Update APIs with no downtime.
  • Stream logs from APIs (see the snippet after this list).
  • Perform A/B tests.
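
For example, updating and observing a running API happens through the CLI; a minimal sketch using commands from the Cortex CLI of this era (cortex get appears later in this page; cortex deploy and cortex logs are from the same CLI, and the API name text-generator is assumed):

# redeploy after editing predictor.py or cortex.yaml; the update rolls out with no downtime
$ cortex deploy

# stream logs from the API's replicas
$ cortex logs text-generator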

Scale

  • Test locally, scale on your AWS account.
  • Autoscale to handle production traffic.
  • Reduce cost with spot instances.

How it works

Write APIs in Python

Define any realtime or batch inference pipeline as a simple Python API, regardless of framework.

# predictor.py

from transformers import pipeline

class PythonPredictor:
    def __init__(self, config):
        # load a Hugging Face text-generation pipeline once, at API startup
        self.model = pipeline(task="text-generation")

    def predict(self, payload):
        # payload is the parsed JSON request body; return the first generated sequence
        return self.model(payload["text"])[0]
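
Once deployed, the API accepts JSON requests and the parsed body is passed to predict as payload; an illustrative request (the endpoint URL is a placeholder):

$ curl https://example.com/text-generator \
    -X POST -H "Content-Type: application/json" \
    -d '{"text": "machine learning is"}'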

Configure infrastructure in YAML

Configure autoscaling, monitoring, compute resources, update strategies, and more.

# cortex.yaml

- name: text-generator
  predictor:
    path: predictor.py
  networking:
    api_gateway: public
  compute:
    gpu: 1
  autoscaling:
    min_replicas: 3

Scale to handle production traffic

Handle traffic with request-based autoscaling. Minimize spend with spot instances and multi-model APIs.

$ cortex get text-generator

endpoint: https://example.com/text-generator

status   last-update   replicas   requests   latency
live     10h           10         100000     100ms
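
One way to serve several models from one API is to load them all in a single predictor and route per request; a minimal hypothetical sketch (the payload's "model" field is an illustrative convention, not a Cortex requirement):

# predictor.py (hypothetical multi-model sketch)

from transformers import pipeline

class PythonPredictor:
    def __init__(self, config):
        # load each model once, at API startup
        self.models = {
            "generator": pipeline(task="text-generation"),
            "sentiment": pipeline(task="sentiment-analysis"),
        }

    def predict(self, payload):
        # route the request to the model named in the payload
        return self.models[payload["model"]](payload["text"])[0]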

Integrate with your stack

Integrate Cortex with any data science platform and CI/CD tooling, without changing your workflow.

# predictor.py

import tensorflow
import torch
import transformers
import mlflow

...
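
As one concrete integration, a predictor can load a model logged with MLflow; a sketch assuming the model URI is passed through the API's config (the model_uri key is an assumption for illustration, not a built-in Cortex field):

# predictor.py (hypothetical MLflow sketch)

import mlflow.pyfunc
import pandas as pd

class PythonPredictor:
    def __init__(self, config):
        # "model_uri" is an assumed config key, e.g. an s3:// path set in cortex.yaml
        self.model = mlflow.pyfunc.load_model(config["model_uri"])

    def predict(self, payload):
        # pyfunc models take tabular input; wrap the JSON payload in a DataFrame
        prediction = self.model.predict(pd.DataFrame([payload]))
        # assumes the model flavor returns a numpy array
        return prediction.tolist()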

Run on your AWS account

Run Cortex on your AWS account (GCP support is coming soon), maintaining control over resource utilization and data access.

# cluster.yaml

region: us-west-2
instance_type: g4dn.xlarge
spot: true
min_instances: 1
max_instances: 5

Focus on machine learning, not DevOps

You don't need to bring your own cluster or containerize your models; Cortex automates your cloud infrastructure.

$ cortex cluster up

configuring networking ...
configuring logging ...
configuring metrics ...
configuring autoscaling ...

cortex is ready!

Get started

bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.20/get-cli.sh)"

See our installation guide, then deploy one of our examples or bring your own models to build realtime APIs and batch APIs.
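
After installing the CLI and spinning up a cluster, a first deployment typically looks like this (assuming a cortex.yaml like the one above; cortex deploy is from the same CLI as cortex get):

# deploy the APIs declared in cortex.yaml
$ cortex deploy

# check status and fetch the endpoint
$ cortex get text-generator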

About

License: Apache License 2.0


Languages

Go 88.2%, Python 6.3%, Shell 3.7%, Dockerfile 1.0%, HTML 0.6%, Makefile 0.3%