Omari (xchangebit)

Company: @lampkicking @Trumin @getyoti

Location: London

Home Page: https://www.yoti.com

Organizations
getyoti
lampkicking

Omari's starred repositories

server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.

Language: Python | License: BSD-3-Clause | Stargazers: 8130 | Issues: 139 | Issues: 3736
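
As a quick illustration of how a client talks to a running Triton server, here is a minimal sketch using the official `tritonclient` Python package; the server address, model name, and tensor names are placeholders, not anything from this profile.

```python
# Minimal sketch of an HTTP inference request to a running Triton server.
# The model name "simple" and its INPUT0/OUTPUT0 tensors are assumptions;
# substitute the tensors of whatever model the server actually hosts.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

inp = httpclient.InferInput("INPUT0", [1, 16], "INT32")
inp.set_data_from_numpy(np.arange(16, dtype=np.int32).reshape(1, 16))

result = client.infer(
    "simple",
    inputs=[inp],
    outputs=[httpclient.InferRequestedOutput("OUTPUT0")],
)
print(result.as_numpy("OUTPUT0"))
```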

nlp.js

An NLP library for building bots, with entity extraction, sentiment analysis, automatic language identification, and more

Language: JavaScript | License: MIT | Stargazers: 6237 | Issues: 108 | Issues: 429

fully-homomorphic-encryption

An FHE compiler for C++

Language: C++ | License: Apache-2.0 | Stargazers: 3512 | Issues: 89 | Issues: 37

FuckAdBlock

Detects ad blockers (AdBlock, ...)

Language: JavaScript | License: MIT | Stargazers: 1895 | Issues: 82 | Issues: 74

cortex

Drop-in, local AI alternative to the OpenAI stack. Multi-engine (llama.cpp, TensorRT-LLM, ONNX). Powers 👋 Jan

Language: C++ | License: Apache-2.0 | Stargazers: 1885 | Issues: 14 | Issues: 411
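
Since the description advertises a drop-in, OpenAI-compatible local stack, a hedged sketch of what that usually looks like from Python follows; the base URL, port, and model name are assumptions, not taken from the project.

```python
# Sketch of an OpenAI-compatible chat request against a local server.
# The base_url, api_key handling, and model name are all assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local endpoint
    api_key="not-needed",                 # local servers often ignore the key
)

resp = client.chat.completions.create(
    model="llama3",                       # placeholder model name
    messages=[{"role": "user", "content": "Say hello from a local model."}],
)
print(resp.choices[0].message.content)
```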

OnnxStream

Lightweight inference library for ONNX files, written in C++. It can run Stable Diffusion XL 1.0 on a RPI Zero 2 (or in 298MB of RAM) but also Mistral 7B on desktops and servers. ARM, x86, WASM, RISC-V supported. Accelerated by XNNPACK.

Language: C++ | License: NOASSERTION | Stargazers: 1833 | Issues: 28 | Issues: 74

pipeless

An open-source computer vision framework to build and deploy apps in minutes

Language: Rust | License: Apache-2.0 | Stargazers: 707 | Issues: 5 | Issues: 36

model_server

A scalable inference server for models optimized with OpenVINO™

Language: C++ | License: Apache-2.0 | Stargazers: 658 | Issues: 31 | Issues: 158

articulate

A platform for building conversational interfaces with intelligent agents (chatbots)

Language: JavaScript | License: Apache-2.0 | Stargazers: 598 | Issues: 41 | Issues: 1059

detect-zoom

Cross Browser Zoom and Pixel Ratio Detector

websocketproxy

WebSocket reverse proxy handler for Go

Language: Go | License: MIT | Stargazers: 427 | Issues: 19 | Issues: 16

willow-inference-server

Open source, local, and self-hosted highly optimized language inference server supporting ASR/STT, TTS, and LLM across WebRTC, REST, and WS

Language: Python | License: Apache-2.0 | Stargazers: 375 | Issues: 17 | Issues: 83

not-only-fans

An open-source, self-hosted digital content subscription platform like `onlyfans.com` with cryptocurrency payments

Language: HTML | License: GPL-2.0 | Stargazers: 373 | Issues: 12 | Issues: 4

yolov4-triton-tensorrt

This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server

Language: C++ | License: NOASSERTION | Stargazers: 278 | Issues: 15 | Issues: 63

ip-index

A fast offline IP lookup library. Detects VPN/hosting.

Language: JavaScript | License: GPL-3.0 | Stargazers: 210 | Issues: 7 | Issues: 24

onnxruntime-server

ONNX Runtime Server: a server that provides TCP and HTTP/HTTPS REST APIs for ONNX inference.

Language: C++ | License: MIT | Stargazers: 115 | Issues: 2 | Issues: 2
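
Given that the server exposes HTTP/HTTPS REST APIs for ONNX inference, a request from Python would look roughly like the sketch below; the URL path and JSON payload are hypothetical, so check the project's documentation for the real contract.

```python
# Hypothetical REST call to an ONNX inference endpoint; the path
# "/api/sessions/my-model/1" and the input payload shape are assumptions,
# not taken from the project's actual API.
import requests

resp = requests.post(
    "http://localhost:8080/api/sessions/my-model/1",
    json={"x": [[1.0, 2.0, 3.0]]},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```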

yolov3

Go implementation of the YOLOv3 object detection system

Language: Go | License: MIT | Stargazers: 76 | Issues: 1 | Issues: 3

adult-image-detector

Uses deep neural networks and other algorithms to detect nude images

goinfer

Lightweight inference server for local language models

Language: Go | License: MIT | Stargazers: 5 | Issues: 4 | Issues: 3

web-fcm-demo

Simple repo to use Web Face Capture Module and request AI Services

Language: JavaScript | Stargazers: 4 | Issues: 5 | Issues: 0

shiny-memory

Simple inference API server for images and videos, written in Rust

Language: Rust | Stargazers: 2 | Issues: 2 | Issues: 0

5GHackathon

This is the software stack for low-latency, real-time video inference on an edge server

Language: JavaScript | Stargazers: 1 | Issues: 0 | Issues: 0

Banking-System-Front

This is the front end of a full-stack project that implements partial homomorphic encryption with the Paillier cryptosystem in a banking system.

Language: TypeScript | Stargazers: 1 | Issues: 1 | Issues: 0
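
The additive homomorphism that makes Paillier useful for a banking demo can be shown in a few lines; this is a textbook toy with tiny primes, purely illustrative and unrelated to the project's actual key handling.

```python
# Toy Paillier demo (NOT secure): multiplying two ciphertexts yields a
# ciphertext of the sum of the plaintexts, which is the property a
# balance-updating banking demo relies on.
import math
import random

p, q = 17, 19                      # toy primes; real keys use large primes
n, n2 = p * q, (p * q) ** 2
g = n + 1                          # common choice of generator
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # L(g^lam mod n^2)^(-1) mod n

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n) * mu % n

a, b = 42, 58
assert decrypt(encrypt(a) * encrypt(b) % n2) == a + b   # E(a)*E(b) -> a+b
print("homomorphic sum:", decrypt(encrypt(a) * encrypt(b) % n2))
```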

ggrrealsense

Realsense inference (depth/object detection) server

Language: JavaScript | Stargazers: 1 | Issues: 2 | Issues: 0

udacity-intel-people-counter-app

Deploy a People Counter App at the Edge. For this project, you'll first find a useful person detection model and convert it to an Intermediate Representation for use with the Model Optimizer. Utilizing the Inference Engine, you'll use the model to perform inference on an input video and extract useful data concerning the count of people in frame and how long they stay in frame. You'll send this information over MQTT, as well as the output frame, in order to view it from a separate UI server over a network.

Language: JavaScript | Stargazers: 1 | Issues: 0 | Issues: 0
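
The MQTT step in that workflow is simple to sketch; assuming a local Mosquitto-style broker on the default port, and with placeholder topic names (the project defines its own), publishing the stats looks roughly like this.

```python
# Publish people-counter stats over MQTT using paho-mqtt's helper module.
# Broker address and topic names are placeholders, not the project's.
import json
import paho.mqtt.publish as publish

def publish_stats(people_in_frame: int, avg_duration_s: float) -> None:
    publish.single("person", json.dumps({"count": people_in_frame}),
                   hostname="localhost", port=1883)
    publish.single("person/duration", json.dumps({"duration": avg_duration_s}),
                   hostname="localhost", port=1883)

publish_stats(3, 12.5)
```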

simple-transcription-client

This is a frontend that streams microphone audio to a backend. The backend passes the audio along to a Triton Inference Server.

Language: JavaScript | Stargazers: 1 | Issues: 0 | Issues: 0

InferenceServer

An ML inference server to serve Image classification over HTTP

Language: JavaScript | License: MIT | Stargazers: 1 | Issues: 2 | Issues: 7