東's repositories
TRT_PRO_LEARN
Notes on understanding the tensorRT_Pro open-source project :tm: :cyclone:
YOLOv8_deploy
YOLOv8 C++ TensorRT inference library for detection, segmentation, and pose estimation; built for learning, extension, and practical deployment :tv:
TensorRT_dong
NVIDIA® TensorRT™ notes :tm: study records and related insights :rocket:
C_plus_plus
C++ notes :tm:
FFmpeg_dong
Video encoding/decoding notes :tm:
Ncnn_deploy
ncnn, a high-performance neural network inference framework heavily optimized for mobile platforms :sparkles:
Pybind11_Lib
pybind11 — Seamless operability between C++11 and Python :city_sunrise:
chameleon
Systematic AIGC notes: from 0 to 1 :alien:
fastllm_dong
A pure C++ cross-platform LLM acceleration library with Python bindings; ChatGLM-6B-class models reach 10,000+ tokens/s on a single GPU; supports GLM, LLaMA, and MOSS base models and runs smoothly on mobile devices
Lidar_AI_Solution
A project demonstrating Lidar-related AI solutions, including three GPU-accelerated Lidar/camera DL networks (PointPillars, CenterPoint, BEVFusion) and the related libs (cuPCL, 3D SparseConvolution, YUV2RGB, cuOSD).
nvinfer_Pro
C++ library built on TensorRT integration :100:
TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
tensorRT_Pro
C++ library built on TensorRT integration
ComfyUI
The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
DeepStream-Yolo
NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
infer
A new TensorRT integration.
lite.ai.toolkit
🛠 A lite C++ toolkit of awesome AI models with ONNXRuntime, NCNN, MNN, and TNN. YOLOv5, YOLOX, YOLOP, YOLOv6, YOLOR, MODNet, YOLOv7, YOLOv8.
opencv-mobile
The minimal opencv for Android, iOS, ARM Linux, Windows, Linux, MacOS, WebAssembly
sophon_dong
Deployment and optimization demos for Sophon (SOPHGO) chips
video_pipe_c_dong
A plugin-oriented framework for video structuring. Chinese developers can add WeChat zhzhi78 to join the discussion group.
Zhouwenwang
Zhouwenwang, a divination large language model: I Ching and BaZi fortune-telling with name analysis :curly_loop: