Haotong Qin's repositories
awesome-model-quantization
A list of papers, docs, and code about model quantization. This repo aims to provide information for model quantization research and is continuously improved. Pull requests adding works (papers, repositories) that the repo is missing are welcome.
awesome-efficient-aigc
A list of papers, docs, and code about efficient AIGC, covering both language and vision. This repo aims to provide information for efficient AIGC research and is continuously improved. Pull requests adding works (papers, repositories) that the repo is missing are welcome.
Awesome-Efficient-LLM
A curated list for Efficient Large Language Models
Awesome-LLM-Compression
Awesome LLM compression research papers and tools.
Neural-Network-Diffusion
We introduce a novel approach to parameter generation, named neural network diffusion (p-diff, where p stands for parameter), which employs a standard latent diffusion model to synthesize a new set of network parameters.
al-folio
A beautiful, simple, clean, and responsive Jekyll theme for academics
GoogleBard-VisUnderstand
How Good is Google Bard's Visual Understanding? An Empirical Study on Open Challenges
gpt4free
Decentralising the AI industry; just some language model APIs...
ColossalAI
Making big AI models cheaper, easier, and scalable
mmhuman3d
OpenMMLab 3D Human Parametric Model Toolbox and Benchmark
AOT
Associating Objects with Transformers for Video Object Segmentation
STM-Training
Training script for the Space-Time Memory (STM) network
Paper-Writing-Tips
Paper Writing Tips
aot-benchmark
An efficient modular implementation of Associating Objects with Transformers for Video Object Segmentation in PyTorch
micronet
micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT) — high-bit (>2b: DoReFa; Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference) and low-bit (≤2b)/ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); (2) pruning: normal, regular, and group convolutional channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT, FP32/FP16/INT8 (PTQ calibration), op adaptation (upsample), dynamic shape
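To illustrate the post-training quantization that entries like micronet implement, here is a minimal sketch of symmetric 8-bit quantization of a weight tensor. This is illustrative pure Python under assumed conventions (per-tensor scale, signed range), not micronet's actual API; the function names are hypothetical.

```python
def quantize_symmetric(weights, num_bits=8):
    """Map float weights to signed integers in [-(2**(b-1)-1), 2**(b-1)-1]
    using a single per-tensor scale (symmetric PTQ)."""
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax or 1.0  # avoid zero scale
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from integers and the scale."""
    return [v * scale for v in q]

# Example: round-trip a small weight vector.
w = [0.52, -1.27, 0.003, 0.98]
q, s = quantize_symmetric(w)
w_hat = dequantize(q, s)
```

Each reconstructed weight differs from the original by at most half a quantization step (scale/2), which is the basic accuracy/size trade-off these libraries tune.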
ai_and_memory_wall
AI and Memory Wall blog post
cv
Geoff Boeing's academic CV in LaTeX
BiPointNet
This project is the official implementation of our ICLR 2021 paper BiPointNet: Binary Neural Network for Point Clouds.
cnn-quantization
Quantization of convolutional neural networks.
DFQ
PyTorch implementation of Data Free Quantization Through Weight Equalization and Bias Correction.
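The cross-layer weight equalization behind DFQ can be sketched in a few lines: for two consecutive layers separated by a (positive-homogeneous) ReLU, scaling output channel i of the first layer by 1/s and the matching input channel of the second layer by s leaves the network function unchanged, so s can be chosen to balance the per-channel weight ranges. A simplified pure-Python sketch under those assumptions (dense layers, ranges measured as max absolute weight; names are illustrative, not the repo's API):

```python
import math

def equalize(W1, W2):
    """Cross-layer equalization: W1's rows are the output channels of layer 1,
    W2[j][i] is layer 2's weight on input channel i. Choosing
    s_i = sqrt(r1_i / r2_i) makes both per-channel ranges equal to
    sqrt(r1_i * r2_i) while preserving the composed function (modulo ReLU)."""
    for i in range(len(W1)):
        r1 = max(abs(w) for w in W1[i])                    # range of output channel i
        r2 = max(abs(W2[j][i]) for j in range(len(W2)))    # range of input channel i
        s = math.sqrt(r1 / r2)
        W1[i] = [w / s for w in W1[i]]                     # scale channel down...
        for j in range(len(W2)):
            W2[j][i] *= s                                  # ...and compensate downstream
    return W1, W2

# Example: two channels with badly mismatched ranges (2.0 vs 0.1, 0.5 vs 4.0).
W1 = [[2.0, -1.0], [0.5, 0.25]]
W2 = [[0.1, 4.0], [0.05, -2.0]]
equalize(W1, W2)
```

After equalization, every output channel of W1 and its matching input channel of W2 share the same range, so a single per-tensor quantization scale fits all channels much better — the core idea of data-free quantization.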