There are 30 repositories under the inference-api topic.
A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
This repository lets you get started with GUI-based training of a state-of-the-art deep learning model with little to no configuration needed! No-code training with TensorFlow has never been so easy.
The simplest way to serve AI/ML models in production
The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.) and ready to deploy on Qualcomm® devices.
A beautiful Flask web API for YOLOv7 (and custom) models
Train models and run predictions with pre-trained deep learning models through a GUI (web app). No more endless parameters, no more data preprocessing.
This repository allows you to get started with training a State-of-the-art Deep Learning model with little to no configuration needed! You provide your labeled dataset and you can start the training right away. You can even test your model with our built-in Inference REST API. Training classification models with GluonCV has never been so easy.
This is a repository for an image classification inference API using the GluonCV framework. The inference REST API runs on CPU or GPU and is supported on Windows and Linux operating systems. Models trained with our GluonCV classification training repository can be deployed in this API, and several models can be loaded and used at the same time.
Eternal is an experimental platform for machine learning models and workflows.
🤗 Hugging Face Inference Client written in Go
TypeScript wrapper for the Hugging Face Inference API.
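For context, a wrapper like this is essentially a thin layer over a single HTTP call. A minimal TypeScript sketch of that call, assuming a text-generation model and a token in HF_TOKEN (the model id and the generateText helper are illustrative, not the wrapper's actual API):

```ts
const HF_TOKEN = process.env.HF_TOKEN; // your Hugging Face access token

async function generateText(model: string, prompt: string): Promise<string> {
  const res = await fetch(`https://api-inference.huggingface.co/models/${model}`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${HF_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ inputs: prompt }),
  });
  if (!res.ok) throw new Error(`Inference API error: ${res.status}`);
  // Text-generation endpoints respond with an array of { generated_text } objects.
  const out = (await res.json()) as Array<{ generated_text: string }>;
  return out[0].generated_text;
}

generateText("gpt2", "An inference API is").then(console.log);
```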
the small distributed language model toolkit; fine-tune state-of-the-art LLMs anywhere, rapidly
Describes how to enable the OpenVINO Execution Provider for ONNX Runtime
An open-source framework for Retrieval-Augmented Generation (RAG) that uses semantic search to retrieve the expected results and generate human-readable conversational responses with the help of a large language model (LLM).
A tool for testing different large language models without writing code.
An unofficial Hugging Face REST client for Unity (UPM)
REST APIs for Stable Diffusion, with inference support on Azure ML
A Node.js backend that exposes a TypeScript implementation of the deCheem inference engine.
A networked inference server for Whisper so you don't have to keep waiting for the audio model to reload for the hundredth time.
A message-queue-based server architecture to asynchronously handle resource-intensive tasks (e.g., ML inference)
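The idea behind that pattern is that the request handler only enqueues work while a separate worker runs the model, so slow inference never blocks the front end. A rough TypeScript sketch using BullMQ over Redis (the queue name, payload, and runModel stub are assumptions, not necessarily this repo's actual stack):

```ts
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 }; // Redis backs the queue

// Producer side: the web tier enqueues the job and returns immediately.
const inferenceQueue = new Queue("inference", { connection });

export async function submitJob(imageUrl: string): Promise<string | undefined> {
  const job = await inferenceQueue.add("classify", { imageUrl });
  return job.id; // the client polls or subscribes for the result by job id
}

// Consumer side: a separate worker process pulls jobs and runs the model.
// runModel() is a hypothetical stand-in for the actual inference call.
new Worker(
  "inference",
  async (job) => runModel(job.data.imageUrl),
  { connection, concurrency: 2 } // cap concurrency to fit the GPU budget
);

async function runModel(imageUrl: string): Promise<{ label: string }> {
  return { label: `stub result for ${imageUrl}` };
}
```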
Text components powering LLMs & SLMs for the geniusrise framework
Practice work for the Machine Learning in Production course
Computer Vision API built using FastAPI and pretrained models converted to ONNX format
A simple Node.js example that generates an image using Stable Diffusion via the Hugging Face Inference API.
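For text-to-image models, the Inference API returns raw image bytes rather than JSON, so the Node side mostly just saves the response. A hedged TypeScript sketch (the model id, HF_TOKEN, and the generateImage helper are illustrative, not this repo's code):

```ts
import { writeFile } from "node:fs/promises";

async function generateImage(model: string, prompt: string, outFile: string) {
  const res = await fetch(`https://api-inference.huggingface.co/models/${model}`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.HF_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ inputs: prompt }),
  });
  if (!res.ok) throw new Error(`Inference API error: ${res.status}`);
  // Image endpoints return the image itself, not JSON.
  await writeFile(outFile, Buffer.from(await res.arrayBuffer()));
}

generateImage("stabilityai/stable-diffusion-xl-base-1.0", "a lighthouse at dawn", "out.png");
```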
Computer Vision API V2 - FastAPI & ONNX Models
Monitor Lambda ML inference with a CloudWatch dashboard using the AWS CDK (Python)
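That repository uses the Python CDK; a rough TypeScript equivalent (kept in the same language as the other sketches on this page) that wires a Lambda function's standard metrics into a dashboard could look like this, with the function ARN and names as placeholders:

```ts
import { Stack, StackProps } from "aws-cdk-lib";
import * as cloudwatch from "aws-cdk-lib/aws-cloudwatch";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Construct } from "constructs";

export class InferenceMonitoringStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Reference the existing inference function (placeholder ARN).
    const fn = lambda.Function.fromFunctionArn(
      this,
      "InferenceFn",
      "arn:aws:lambda:us-east-1:123456789012:function:ml-inference"
    );

    // One dashboard with the metrics you usually watch first.
    const dashboard = new cloudwatch.Dashboard(this, "Dashboard", {
      dashboardName: "ml-inference",
    });
    dashboard.addWidgets(
      new cloudwatch.GraphWidget({ title: "Invocations", left: [fn.metricInvocations()] }),
      new cloudwatch.GraphWidget({ title: "Duration", left: [fn.metricDuration()] }),
      new cloudwatch.GraphWidget({ title: "Errors", left: [fn.metricErrors()] })
    );
  }
}
```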
A Rust client for the Groq Inference API
MLDrop model serving for PyTorch
The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.) and ready to deploy on Qualcomm® devices.
Text-to-image generation with the Stable Diffusion XL model, powered by the Hugging Face Inference API