Omari's starred repositories
fully-homomorphic-encryption
An FHE compiler for C++
FuckAdBlock
Detects ad blockers (AdBlock, ...)
OnnxStream
Lightweight inference library for ONNX files, written in C++. It can run Stable Diffusion XL 1.0 on a RPI Zero 2 (or in 298MB of RAM) but also Mistral 7B on desktops and servers. ARM, x86, WASM, RISC-V supported. Accelerated by XNNPACK.
model_server
A scalable inference server for models optimized with OpenVINO™
articulate
A platform for building conversational interfaces with intelligent agents (chatbots)
detect-zoom
Cross Browser Zoom and Pixel Ratio Detector
websocketproxy
WebSocket reverse proxy handler for Go
willow-inference-server
Open source, local, and self-hosted highly optimized language inference server supporting ASR/STT, TTS, and LLM across WebRTC, REST, and WS
not-only-fans
An open source, self-hosted digital content subscription platform like `onlyfans.com`, with cryptocurrency payments
yolov4-triton-tensorrt
This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server
onnxruntime-server
A server that provides TCP and HTTP/HTTPS REST APIs for ONNX inference
adult-image-detector
Uses deep neural networks and other algorithms to detect nude images
web-fcm-demo
Simple demo repo that uses the Web Face Capture Module and requests AI services
shiny-memory
Simple inference API server for images and videos, written in Rust
5GHackathon
Software stack for low-latency, real-time video inference on an edge server
Banking-System-Front
Front end of a full-stack project that applies partial homomorphic encryption, using the Paillier cryptosystem, to a banking system
ggrrealsense
Realsense inference (depth/object detection) server
udacity-intel-people-counter-app
Deploy a People Counter App at the Edge. For this project, you'll first find a useful person-detection model and convert it to an Intermediate Representation using the Model Optimizer. Using the Inference Engine, you'll run inference on an input video and extract useful data: the number of people in frame and how long they stay there. You'll send this information, along with the output frame, over MQTT so it can be viewed from a separate UI server over the network.
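The counting step described above can be sketched as a pure function over per-frame detection counts. This is a minimal illustration, not the project's actual code: the function name, the rising-edge counting heuristic, and the frame-count input format are assumptions, and the MQTT publishing the real app performs (e.g., via a client library) is omitted.

```python
def summarize_counts(frame_counts, fps):
    """Aggregate per-frame person counts into the stats the app would report.

    frame_counts: number of people detected in each video frame (assumed input).
    fps: frames per second of the source video.
    Returns (total_people, avg_duration_seconds).
    """
    total_people = 0   # each rising edge counts as a new person entering the frame
    person_frames = 0  # total person-presence, measured in frames
    prev = 0
    for count in frame_counts:
        if count > prev:
            total_people += count - prev
        person_frames += count
        prev = count
    avg_duration = (person_frames / total_people) / fps if total_people else 0.0
    return total_people, avg_duration


# Example: one person for 3 frames, then two people for 2 frames, at 1 fps
total, avg = summarize_counts([0, 1, 1, 1, 0, 2, 2, 0], fps=1)
```

In the real app these two numbers, plus the annotated output frame, would be published over MQTT for the UI server to display; simple edge-counting like this also undercounts if one person leaves in the same frame another enters, which a production tracker would handle.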
simple-transcription-client
A frontend that streams microphone audio to a backend, which forwards it to a Triton Inference Server.
InferenceServer
An ML inference server that serves image classification over HTTP