Chengyuan Li's repositories
Single-Cycle-CPU-10
A single-cycle CPU in Verilog HDL (supports 10 instructions)
50projects50days
50+ mini web projects using HTML, CSS & JS
hexpod-robot
UC Berkeley EECS249 Project
weibo-crawler
Crawls Weibo users' information using Selenium
EAGLE
EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty
GCC
GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training
HelloworldBlockchain
HelloworldBlockchain is a Helloworld-level public blockchain system and Helloworld-level digital currency project, built for blockchain beginners to study: clear architecture, complete documentation, Chinese comments, and highly readable code. Development and debugging are simple: download the source, import it into IDEA (or Eclipse) with no configuration needed, find the launcher class com.xingkaichun.helloworldblockchain.explorer.HelloWorldBlockchainExplorerApplication, and run it to start the project.
kuboard-press
Kuboard is a Kubernetes-based microservice management UI. It also provides free Chinese-language Kubernetes tutorials, including a beginner's guide, an installation manual for the latest Kubernetes v1.20 (k8s install), online Q&A, and continuous updates.
MegCC
MegCC is a deep learning model compiler with an ultra-lightweight runtime, high efficiency, and easy portability
Model-Compression-Papers
Papers for deep neural network compression and acceleration
models
A collection of pre-trained, state-of-the-art models in the ONNX format
MVision
Robot vision and mobile robotics: VS-SLAM, ORB-SLAM2, deep learning object detection (YOLOv3), action detection, OpenCV, PCL, machine learning, autonomous driving
ncnn
ncnn is a high-performance neural network inference framework optimized for the mobile platform
nft-auction
An NFT auction application that lets users create NFTs and auction them off to other users in a marketplace model, using Hyperledger Fabric
nft-drop-starter-project
My first NFT project
openvino_notebooks
📚 A collection of Jupyter notebooks for learning and experimenting with OpenVINO 👓
PocketFlow
An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.
pytorch-YOLOv4
PyTorch, ONNX, and TensorRT implementation of YOLOv4
pytorch_geometric
Geometric Deep Learning Extension Library for PyTorch
server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
tensorrt_backend
The Triton backend for TensorRT.
tensorrtx
Implementation of popular deep learning networks with TensorRT network definition API
triton-tensorrt-CRAFT-pytorch
Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch), including a converter from PyTorch -> ONNX -> TensorRT. Supported model formats for Triton inference: TensorRT engine, TorchScript, ONNX
yolov4-triton-tensorrt
This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server