Haotong Qin's repositories
awesome-model-quantization
A list of papers, docs, and code about model quantization. This repo aims to provide information for model quantization research and is continuously being improved. Pull requests for works (papers, repositories) missing from the repo are welcome.
awesome-efficient-aigc
A list of papers, docs, and code about efficient AIGC, covering both language and vision. This repo aims to provide information for efficient AIGC research and is continuously being improved. Pull requests for works (papers, repositories) missing from the repo are welcome.
BiPointNet
This project is the official implementation of our ICLR 2021 paper BiPointNet: Binary Neural Network for Point Clouds.
GoogleBard-VisUnderstand
How Good is Google Bard's Visual Understanding? An Empirical Study on Open Challenges
Paper-Writing-Tips
Paper Writing Tips
AOT
Associating Objects with Transformers for Video Object Segmentation
ai_and_memory_wall
AI and Memory Wall blog post
al-folio
A beautiful, simple, clean, and responsive Jekyll theme for academics
aot-benchmark
An efficient modular implementation of Associating Objects with Transformers for Video Object Segmentation in PyTorch
Awesome-Efficient-LLM
A curated list for Efficient Large Language Models
Awesome-LLM-Compression
Awesome LLM compression research papers and tools.
cnn-quantization
Quantization of convolutional neural networks.
ColossalAI
Making big AI models cheaper, easier, and scalable
DFQ
PyTorch implementation of Data Free Quantization Through Weight Equalization and Bias Correction.
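The core idea behind the weight equalization step can be sketched as follows. For a pair of layers y = W2 · relu(W1 · x), each channel i of W1's output is scaled down by s_i and the matching column of W2 is scaled up by s_i; because ReLU is positively homogeneous, the network's function is unchanged while the per-channel weight ranges become equal. This is a minimal toy sketch of that idea, not the repo's actual API; all names here are illustrative.

```python
import math

def equalize(W1, W2):
    """Cross-layer weight equalization for y = W2 @ relu(W1 @ x).

    W1: n x k (rows = output channels), W2: m x n (columns = input channels).
    Choosing s_i = sqrt(r1_i / r2_i) makes both per-channel ranges equal
    to sqrt(r1_i * r2_i), which helps per-tensor quantization.
    """
    n = len(W1)
    s = []
    for i in range(n):
        r1 = max(abs(v) for v in W1[i])                   # range of W1's row i
        r2 = max(abs(W2[j][i]) for j in range(len(W2)))   # range of W2's column i
        s.append(math.sqrt(r1 / r2))
    W1p = [[v / s[i] for v in W1[i]] for i in range(n)]
    W2p = [[W2[j][i] * s[i] for i in range(n)] for j in range(len(W2))]
    return W1p, W2p, s

def forward(W1, W2, x):
    """Tiny reference forward pass: W2 @ relu(W1 @ x)."""
    h = [max(sum(w * xi for w, xi in zip(row, x)), 0.0) for row in W1]
    return [sum(w * hi for w, hi in zip(row, h)) for row in W2]

# Toy example: the output is identical before and after equalization.
W1 = [[2.0, 0.0], [0.0, 0.5]]
W2 = [[1.0, 4.0]]
W1p, W2p, s = equalize(W1, W2)
y0 = forward(W1, W2, [1.0, 1.0])    # original network
y1 = forward(W1p, W2p, [1.0, 1.0])  # equalized network
```

Note that the equalized weights compute exactly the same function; only the split of magnitude between the two layers changes. (The paper's bias correction step, which absorbs quantization-induced bias shifts, is omitted here.)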
gpt4free
Decentralising the AI industry; just some language model APIs...
micronet
micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa; Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference) and low-bit (≤2b)/ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); (2) pruning: normal, regular, and group convolutional channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape.
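The 8-bit post-training quantization mentioned above can be illustrated with a minimal sketch of per-tensor symmetric int8 quantization; function names and the calibration choice (max-abs scale) are illustrative only, not micronet's actual API.

```python
def quantize_int8(weights):
    """Map float weights to int8 using a per-tensor symmetric scale,
    calibrated from the maximum absolute weight (max-abs calibration)."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0  # symmetric int8 range: [-127, 127]
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [qi * scale for qi in q]

w = [0.5, -1.2, 0.03, 1.2]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # close to w; rounding error is at most scale/2
```

In a real PTQ pipeline (e.g. the TensorRT path the repo mentions), the scale for activations is instead calibrated on a small dataset, and per-channel scales are typically used for weights.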
mmhuman3d
OpenMMLab 3D Human Parametric Model Toolbox and Benchmark
Neural-Network-Diffusion
We introduce a novel approach for parameter generation, named neural network diffusion (p-diff, where p stands for parameter), which employs a standard latent diffusion model to synthesize a new set of parameters.
STM-Training
Training script for the Space-Time Memory (STM) network.