ganliqiang's starred repositories
TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
llama-cpp-python
Python bindings for llama.cpp
bitsandbytes
Accessible large language models via k-bit quantization for PyTorch.
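The core idea behind k-bit quantization is mapping float weights onto a small integer range with a per-tensor (or per-block) scale. A minimal absmax int8 sketch of that idea in plain Python (illustrative only, not the bitsandbytes API):

```python
# Conceptual absmax 8-bit quantization: scale floats into [-127, 127] ints,
# then dequantize by multiplying the scale back. bitsandbytes applies the
# same principle per block, with fused CUDA kernels.

def absmax_quantize(weights):
    """Quantize a list of floats to int8 range using one absmax scale."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate floats from the int codes."""
    return [q * scale for q in quantized]

weights = [0.1, -0.5, 0.25, 1.0, -0.75]
q, scale = absmax_quantize(weights)
restored = dequantize(q, scale)
```

The reconstruction error of each weight is bounded by half the scale, which is why quantizing works well when outliers are handled separately (the problem bitsandbytes' LLM.int8() addresses).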
inference
Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need: run inference with any open-source language, speech recognition, or multimodal model, whether in the cloud, on-premises, or even on your laptop.
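The "single line" is the base URL: Xinference exposes an OpenAI-compatible HTTP API, so an existing OpenAI-style request only needs to point at the local server. A stdlib sketch, assuming Xinference's default port 9997 and a placeholder model name:

```python
# Build an OpenAI-style chat-completions request aimed at a local Xinference
# server. Only the base URL differs from a request to api.openai.com.
import json
import urllib.request

BASE_URL = "http://localhost:9997/v1"  # the one line that changes

payload = {
    "model": "my-local-llm",  # hypothetical model name launched in Xinference
    "messages": [{"role": "user", "content": "Hello!"}],
}
request = urllib.request.Request(
    BASE_URL + "/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request) would send it once a server is running.
```

The same swap works with the official `openai` client by passing `base_url` when constructing it.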
Depth-Anything-V2
Depth Anything V2. A More Capable Foundation Model for Monocular Depth Estimation
dd-ml-segmentation-benchmark
DroneDeploy Machine Learning Segmentation Benchmark
drone-images-semantic-segmentation
Multi-class semantic segmentation on the Semantic Drone Dataset.
dual-teacher
Official code for the NeurIPS 2023 paper "Switching Temporary Teachers for Semi-Supervised Semantic Segmentation"
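Semi-supervised teacher-student methods in this family typically keep each teacher as an exponential moving average (EMA) of the student's weights. A minimal sketch of the EMA update with plain float "weights" (an assumption about the general technique, not the paper's actual code):

```python
# EMA teacher update: teacher <- m * teacher + (1 - m) * student, per weight.
# Dual-Teacher-style methods alternate which temporary teacher receives
# updates and supervises the student.

def ema_update(teacher, student, momentum=0.99):
    """Blend student weights into the teacher with the given momentum."""
    return [momentum * t + (1.0 - momentum) * s for t, s in zip(teacher, student)]

teacher = [0.0, 1.0]
student = [1.0, 1.0]
teacher = ema_update(teacher, student, momentum=0.9)
```

The momentum keeps the teacher a slowly moving, more stable target for generating pseudo-labels on unlabeled images.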