dzy (666DZY666)

Company: Peking University

Location: Beijing

dzy's starred repositories

Python-100-Days

Python: from novice to master in 100 days

python-patterns

A collection of design patterns/idioms in Python
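
As a taste of the genre, here is a minimal strategy-pattern sketch using first-class functions; it is an illustration in the repo's spirit, not code taken from it:

```python
from typing import Callable

# Strategy pattern: behavior is swapped by passing functions as values.
def bulk_discount(price: float) -> float:
    return price * 0.9  # 10% off

def no_discount(price: float) -> float:
    return price

class Order:
    def __init__(self, price: float,
                 discount: Callable[[float], float] = no_discount):
        self.price = price
        self.discount = discount

    def total(self) -> float:
        return self.discount(self.price)

print(Order(100.0).total())                 # 100.0
print(Order(100.0, bulk_discount).total())  # 90.0
```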

pytorch-tutorial

PyTorch Tutorial for Deep Learning Researchers

Language: Python · License: MIT · Stargazers: 29,571 · Watchers: 625 · Issues: 178

tuning_playbook

A playbook for systematically maximizing the performance of deep learning models.

examples

A set of examples around PyTorch in vision, text, reinforcement learning, etc.

Language: Python · License: BSD-3-Clause · Stargazers: 22,077 · Watchers: 397 · Issues: 636

mlc-llm

Universal LLM Deployment Engine with ML Compilation

Language: Python · License: Apache-2.0 · Stargazers: 17,916 · Watchers: 168 · Issues: 1,223

CenterNet

Object detection, 3D detection, and pose estimation using center point detection.

Language: Python · License: MIT · Stargazers: 7,211 · Watchers: 113 · Issues: 1,001
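
CenterNet represents each object as a single center point and trains against Gaussian heatmap targets. Below is a minimal NumPy sketch of such a target; the grid size and sigma are illustrative choices, not the repository's exact recipe:

```python
import numpy as np

def center_heatmap(h, w, centers, sigma=2.0):
    """Render one Gaussian peak per object center (CenterNet-style target)."""
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros((h, w), dtype=np.float32)
    for cx, cy in centers:
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        heat = np.maximum(heat, g)  # overlapping objects keep the stronger peak
    return heat

hm = center_heatmap(128, 128, [(32, 40), (96, 80)])
print(hm.shape, hm.max())  # (128, 128) 1.0
```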

pytorch-examples

Simple examples to introduce PyTorch

Language: Python · License: MIT · Stargazers: 4,662 · Watchers: 144 · Issues: 29
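
In the same introductory spirit, a self-contained sketch of the canonical PyTorch training loop, with toy data and illustrative hyperparameters:

```python
import torch
from torch import nn

# Toy data: learn y = 3x + 1 from noisy samples.
x = torch.linspace(-1, 1, 128).unsqueeze(1)
y = 3 * x + 1 + 0.05 * torch.randn_like(x)

model = nn.Linear(1, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print(model.weight.item(), model.bias.item())  # ~3.0, ~1.0
```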

mindspore

MindSpore is a new open-source deep learning training/inference framework that can be used in mobile, edge, and cloud scenarios.

Language: C++ · License: Apache-2.0 · Stargazers: 4,173 · Watchers: 149 · Issues: 256

Torch-Pruning

[CVPR 2023] Towards Any Structural Pruning; LLMs / SAM / Diffusion / Transformers / YOLOv8 / CNNs

Language: Python · License: MIT · Stargazers: 2,483 · Watchers: 34 · Issues: 333
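
Structural pruning removes whole channels and then repairs the layers that depend on them. The sketch below shows the idea with generic L1-norm channel selection on a pair of conv layers in plain PyTorch; it is not Torch-Pruning's own API:

```python
import torch
from torch import nn

def prune_conv_pair(conv1: nn.Conv2d, conv2: nn.Conv2d, keep_ratio=0.5):
    """Drop conv1's lowest-L1-norm output channels and the matching
    input channels of conv2, so the pair stays shape-consistent."""
    n_keep = max(1, int(conv1.out_channels * keep_ratio))
    scores = conv1.weight.detach().abs().sum(dim=(1, 2, 3))  # L1 per channel
    keep = torch.topk(scores, n_keep).indices.sort().values

    new1 = nn.Conv2d(conv1.in_channels, n_keep, conv1.kernel_size)
    new1.weight.data = conv1.weight.data[keep].clone()
    new1.bias.data = conv1.bias.data[keep].clone()

    new2 = nn.Conv2d(n_keep, conv2.out_channels, conv2.kernel_size)
    new2.weight.data = conv2.weight.data[:, keep].clone()
    new2.bias.data = conv2.bias.data.clone()
    return new1, new2

c1, c2 = nn.Conv2d(3, 16, 3), nn.Conv2d(16, 32, 3)
p1, p2 = prune_conv_pair(c1, c2)
print(p2(p1(torch.randn(1, 3, 32, 32))).shape)  # torch.Size([1, 32, 28, 28])
```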

KuiperInfer

Build a high-performance deep learning inference library from scratch, step by step; supports inference for models such as Llama 2, UNet, YOLOv5, and ResNet.

Language: C++ · License: MIT · Stargazers: 2,253 · Watchers: 23 · Issues: 26

micronet

micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa; "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b)/ternary/binary (TWN/BNN/XNOR-Net), plus post-training quantization (PTQ), 8-bit (TensorRT); (2) pruning: normal, regular, and group-convolution channel pruning; (3) group-convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape.

Language: Python · License: MIT · Stargazers: 2,205 · Watchers: 40 · Issues: 109
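
The common core of the QAT techniques listed above is a fake-quantize op whose backward pass is a straight-through estimator (STE). A minimal symmetric-int8 sketch, with an illustrative per-tensor scale:

```python
import torch

class FakeQuant8(torch.autograd.Function):
    """Fake-quantize to symmetric int8 in forward; straight-through backward."""
    @staticmethod
    def forward(ctx, x, scale):
        return torch.clamp(torch.round(x / scale), -128, 127) * scale

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None  # STE: pass gradients through the rounding

w = torch.randn(4, 4, requires_grad=True)
scale = w.detach().abs().max() / 127  # illustrative per-tensor scale
wq = FakeQuant8.apply(w, scale)
wq.sum().backward()
print(torch.allclose(w.grad, torch.ones_like(w)))  # True
```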

aimet

AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.

Language: Python · License: NOASSERTION · Stargazers: 2,029 · Watchers: 50 · Issues: 1,334

ppq

PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool.

Language: Python · License: Apache-2.0 · Stargazers: 1,465 · Watchers: 17 · Issues: 219
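
Offline (post-training) quantization tools of this kind first calibrate activation ranges on sample batches. A generic min/max observer sketch using forward hooks; this shows the underlying idea, not PPQ's actual API:

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4)).eval()
ranges = {}

def observe(name):
    def hook(module, inputs, output):
        lo, hi = output.min().item(), output.max().item()
        old = ranges.get(name, (lo, hi))
        ranges[name] = (min(old[0], lo), max(old[1], hi))
    return hook

handles = [m.register_forward_hook(observe(n))
           for n, m in model.named_modules() if isinstance(m, nn.Linear)]

with torch.no_grad():                      # calibration pass
    for _ in range(16):
        model(torch.randn(32, 8))
for h in handles:
    h.remove()

for name, (lo, hi) in ranges.items():      # per-layer int8 scale
    print(name, (hi - lo) / 255)
```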

mmrazor

OpenMMLab Model Compression Toolbox and Benchmark.

Language: Python · License: Apache-2.0 · Stargazers: 1,424 · Watchers: 20 · Issues: 270

torchdistill

A coding-free framework built on PyTorch for reproducible deep learning studies. 🏆 25 knowledge distillation methods presented at CVPR, ICLR, ECCV, NeurIPS, ICCV, etc. are implemented so far. 🎁 Trained models, training logs, and configurations are available to ensure reproducibility and benchmarking.

Language: Python · License: MIT · Stargazers: 1,329 · Watchers: 19 · Issues: 46
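
The baseline objective most distillation methods build on is Hinton's temperature-scaled KL term between teacher and student logits. A minimal sketch, with illustrative temperature and weighting:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hinton-style distillation: CE on labels + KL to the soft teacher."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * T * T                       # rescale gradients to match CE magnitude
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s, t = torch.randn(8, 10), torch.randn(8, 10)
print(kd_loss(s, t, torch.randint(0, 10, (8,))))
```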

onnx-modifier

A tool to modify ONNX models visually, based on Netron and Flask.

Language: JavaScript · License: MIT · Stargazers: 1,223 · Watchers: 11 · Issues: 100
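
The same kinds of edits can also be scripted with the official onnx package. A sketch that drops a node and renames a graph output; the file path and node name are hypothetical, and on real graphs you must check that a removed node's outputs are not consumed elsewhere:

```python
import onnx

model = onnx.load("model.onnx")           # path is illustrative

# Drop a node by name (hypothetical name; inspect model.graph.node first).
keep = [n for n in model.graph.node if n.name != "Identity_7"]
del model.graph.node[:]
model.graph.node.extend(keep)

# Rename the first graph output everywhere it is produced.
old = model.graph.output[0].name
model.graph.output[0].name = "logits"
for node in model.graph.node:
    node.output[:] = ["logits" if o == old else o for o in node.output]

onnx.checker.check_model(model)           # validate before saving
onnx.save(model, "model_edited.onnx")
```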

torchdynamo

A Python-level JIT compiler designed to make unmodified PyTorch programs faster.

Language: Python · License: BSD-3-Clause · Stargazers: 980 · Watchers: 46 · Issues: 567
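
TorchDynamo now ships inside PyTorch 2.x as the graph-capture front end of torch.compile, so basic usage is a one-line wrap:

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))

# TorchDynamo traces the unmodified module and hands graphs to a backend.
compiled = torch.compile(model)

x = torch.randn(32, 64)
print(compiled(x).shape)  # torch.Size([32, 10])
```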

MQBench

Model Quantization Benchmark

Language: Shell · License: Apache-2.0 · Stargazers: 741 · Watchers: 14 · Issues: 196

TinyNeuralNetwork

TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.

Language: Python · License: MIT · Stargazers: 729 · Watchers: 22 · Issues: 134

nndeploy

nndeploy is an end-to-end model deployment framework. Centered on multi-backend inference and DAG-based model deployment, it aims to give users a cross-platform, easy-to-use, high-performance deployment experience.

Language: C++ · License: Apache-2.0 · Stargazers: 459 · Watchers: 17 · Issues: 10

how-to-learn-deep-learning-framework

how to learn PyTorch and OneFlow

BRECQ

PyTorch implementation of BRECQ (ICLR 2021)

Language: Python · License: MIT · Stargazers: 242 · Watchers: 6 · Issues: 42
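
BRECQ calibrates a quantized network block by block, minimizing reconstruction error against the full-precision block's outputs on calibration data. A much-simplified sketch of that objective; the actual method optimizes adaptive weight rounding rather than tuning the block's weights directly:

```python
import copy
import torch
from torch import nn

fp_block = nn.Sequential(nn.Linear(16, 16), nn.ReLU()).eval()
q_block = copy.deepcopy(fp_block)  # stand-in for the quantized block

opt = torch.optim.Adam(q_block.parameters(), lr=1e-3)
for _ in range(100):               # calibration batches
    x = torch.randn(32, 16)
    with torch.no_grad():
        target = fp_block(x)       # full-precision block output
    loss = torch.nn.functional.mse_loss(q_block(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())
```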

EWGS

An official implementation of "Network Quantization with Element-wise Gradient Scaling" (CVPR 2021) in PyTorch.

Language: Python · License: GPL-3.0 · Stargazers: 88 · Watchers: 5 · Issues: 9
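
EWGS replaces the plain straight-through estimator with an element-wise gradient scale: roughly g_x = g_q * (1 + delta * sign(g_q) * (x - Q(x))), so elements far from their quantized value get adjusted gradients. A sketch as a custom autograd function, with delta as an illustrative hyperparameter:

```python
import torch

class EWGSRound(torch.autograd.Function):
    """Round in the forward pass; scale each gradient element in the backward."""
    @staticmethod
    def forward(ctx, x, delta):
        xq = torch.round(x)
        ctx.save_for_backward(x - xq)  # residual to the quantized value
        ctx.delta = delta
        return xq

    @staticmethod
    def backward(ctx, grad_out):
        (residual,) = ctx.saved_tensors
        scale = 1 + ctx.delta * torch.sign(grad_out) * residual
        return grad_out * scale, None

x = torch.randn(5, requires_grad=True)
EWGSRound.apply(x, 0.1).sum().backward()
print(x.grad)  # element-wise scaled straight-through gradients
```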

Neural-Network-Compression-and-Accelerator-on-Hardware

My name is Fang Biao. I'm currently pursuing my Master's degree at the College of Computer Science and Engineering, Sichuan University, Chengdu, China. For more information about me and my research, you can visit [my homepage](https://github.com/hisrg). One of my research interests is architecture design for deep learning and neuromorphic computing. This is an exciting field where fresh ideas come out every day, so I'm collecting works on related topics. Welcome to join us!

IntraQ

PyTorch implementation of our CVPR 2022 paper, *IntraQ: Learning Synthetic Images with Intra-Class Heterogeneity for Zero-Shot Network Quantization*.

Quantformer

The official PyTorch implementation of the paper *Quantformer: Learning Extremely Low-precision Vision Transformers*.

Language: Python · License: Apache-2.0 · Stargazers: 18 · Watchers: 1 · Issues: 1