Repositories under the ai-inference topic:
The easiest way to serve AI apps and models - Build Model Inference APIs, Job queues, LLM apps, Multi-model pipelines, and more!
Extension for Scikit-learn is a seamless way to speed up your Scikit-learn application
oneAPI Data Analytics Library (oneDAL)
The easiest way to use Machine Learning. Mix and match underlying ML libraries and data set sources. Generate new datasets or modify existing ones with ease.
High-Performance AI-Native Web Server — built in C & Assembly for ultra-fast AI inference and streaming.
Client library to interact with various APIs used within Philips in a simple and uniform way
Unity TTS plugin: Piper neural synthesis + OpenJTalk Japanese + Unity AI Inference Engine. Windows/Mac/Linux/Android/iOS ready. High-quality voices for games & apps.
Local LLM Inference Library
Customized version of Google's tflite-micro
A powerful, fast, scalable full-stack boilerplate for AI inference using Node.js, Python, Redis, and Docker
Arbitrary Numbers
🌱 Intelligent IoT greenhouse fan controller using AI/ML for automated climate control. Features ESP32 + DHT22 sensors, real-time Firebase integration, Flutter mobile app with TensorFlow Lite on-device inference, and Wokwi simulation. Complete full-stack solution demonstrating IoT + AI integration.
A personal demo project for Flutter + ONNX Runtime integration. Not related to any company work. A comprehensive on-device face recognition SDK for Flutter.
UniUi uses AI to allow you to talk directly to your system.
AI Inference in GitHub Actions demo
Community governance, contribution rules, and roadmap for the Edge AI open platform.
Image AI Training And Inference For Everyone, For Free
Nimble OKE — deploy NIM on Oracle OKE in minutes for under $50. From zero → smoke test → cleanup, Nimble OKE makes production-ready AI inference fast, affordable, and repeatable.
Citadel AI OS – Enterprise AI Runtime Environment for Inference, Agents, and Business Operations