There are 27 repositories under the serving topic.
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
A flexible, high-performance serving system for machine learning models
AI + Data, online. https://vespa.ai
In this repository, I will share some useful notes and references about deploying deep learning-based models in production.
An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
⚡️An easy-to-use and fast deep learning model deployment toolkit for ☁️Cloud, 📱Mobile, and 📹Edge. Covers 20+ mainstream image, video, text, and audio scenarios and 150+ SOTA models, with end-to-end optimization and multi-platform, multi-framework support.
Database system for AI-powered apps
TensorFlow template application for deep learning
A comprehensive guide to building RAG-based LLM applications for production.
RayLLM - LLMs on Ray
A flexible, high-performance serving framework for machine learning models (PaddlePaddle's serving and deployment framework)
Generic and easy-to-use serving service for machine learning models
A scalable inference server for models optimized with OpenVINO™
Python + Inference - Model Deployment library in Python. Simplest model inference server ever.
ML pipeline orchestration and model deployments on Kubernetes.
A high-performance inference system for large language models, designed for production environments.
MLOps Platform
Blockchain Search with GraphQL APIs
MLModelCI is a complete MLOps platform for managing, converting, profiling, and deploying MLaaS (Machine Learning-as-a-Service), bridging the gap between current ML training and serving systems.
A universal scalable machine learning model deployment solution
Bring Keras models to production with TensorFlow Serving and Node.js + Docker :pizza:
ClearML - Model-Serving Orchestration and Repository Solution
TensorFlow Serving ARM - A project for cross-compiling TensorFlow Serving targeting popular ARM cores
Deploy DL/ML inference pipelines with minimal extra code.
A collection of model deployment libraries and techniques.
Deploy AI models at scale. High-throughput serving engine for AI/ML models that uses the latest state-of-the-art model deployment techniques.
This code is used to build & run a Docker container for performing predictions against a Spark ML Pipeline.