Trần Gia Bảo's repositories
pupilfirst
A learning management system (LMS) that lets you run an asynchronous online school, where learning is achieved through focused tasks, directed feedback, an iterative workflow, and community interaction.
edx-platform
The Open edX LMS & Studio, powering education sites around the world!
SWE-agent
SWE-agent takes a GitHub issue and tries to automatically fix it using GPT-4 or your LM of choice. It solves 12.47% of bugs in the SWE-bench evaluation set and takes just 1 minute to run.
tutor
The Docker-based Open edX distribution designed for peace of mind
tutor-indigo
An elegant, customizable theme for Open edX
petals
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
ncnn
ncnn is a high-performance neural network inference framework optimized for mobile platforms
mediamtx
A ready-to-use SRT / WebRTC / RTSP / RTMP / LL-HLS media server and media proxy that lets you read, publish, proxy, record, and play back video and audio streams.
sahi
Framework-agnostic sliced/tiled inference + interactive UI + error analysis plots
fabric
fabric is an open-source framework for augmenting humans with AI. It provides a modular system for solving specific problems using a crowdsourced set of AI prompts that can be used anywhere.
TrackNetV3
Implementation of the paper "TrackNetV3: Enhancing Shuttlecock Tracking with Augmentations and Trajectory Rectification"
Flowise
Drag & drop UI to build your customized LLM flow
pipeless
An open-source computer vision framework to build and deploy apps in minutes
multi-camera-object-detection
An object detection demo using DL Streamer and OpenVINO, running on Intel® CPUs and iGPUs
supervision
We write your reusable computer vision tools. 💜
yolov8_onnx_go
YOLOv8 inference using Go
dify
An open-source Assistants API and GPTs alternative. Dify.AI is an LLM application development platform that integrates the concepts of Backend-as-a-Service and LLMOps, covering the core tech stack required for building generative AI-native applications, including a built-in RAG engine.
llm-food-delivery
Making the food-delivery experience easy for busy folks :)
RTSPtoWebRTC
RTSP-to-WebRTC converter using Pion WebRTC
aiortc
WebRTC and ORTC implementation for Python using asyncio
server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
inference
A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
go-gst
GStreamer bindings and utilities for Go
yolo_tracking
BoxMOT: pluggable SOTA tracking modules for segmentation, object detection and pose estimation models
LLMCompiler
LLMCompiler: An LLM Compiler for Parallel Function Calling