Haotong Qin (htqin)

Company: ETH Zurich

Location: Zürich, Switzerland

Home Page: https://htqin.github.io/

Twitter: @qin_haotong

Haotong Qin's repositories

awesome-model-quantization

A list of papers, docs, and code about model quantization. This repo aims to provide information for model quantization research and is continuously improved. Pull requests adding works (papers, repositories) the repo has missed are welcome.

IR-Net

This project is the PyTorch implementation of our accepted CVPR 2020 paper Forward and Backward Information Retention for Accurate Binary Neural Networks.
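For context, a minimal sketch of the basic weight binarization that binary neural networks build on: weights are mapped to two values via sign with a scaling factor, and gradients pass through a straight-through estimator (STE). This is the generic scheme (the scaling choice follows XNOR-Net-style mean absolute value); IR-Net's actual information-retention method adds components not shown here, and the function names are illustrative.

```python
# Generic weight binarization sketch (illustrative, not IR-Net's exact method).

def binarize(weights):
    """Binarize real-valued weights to {-alpha, +alpha}, where alpha is
    the mean absolute value of the weights (a common scaling choice)."""
    alpha = sum(abs(w) for w in weights) / len(weights)
    return [alpha if w >= 0 else -alpha for w in weights]

def ste_grad(weights, upstream_grads, clip=1.0):
    """Straight-through estimator: sign() has zero gradient almost
    everywhere, so pass upstream gradients through unchanged for weights
    inside [-clip, clip] and zero them elsewhere."""
    return [g if abs(w) <= clip else 0.0
            for w, g in zip(weights, upstream_grads)]

w = [0.5, -1.0, 2.0, -0.5]
print(binarize(w))                    # -> [1.0, -1.0, 1.0, -1.0]
print(ste_grad(w, [1.0, 1.0, 1.0, 1.0]))  # -> [1.0, 1.0, 0.0, 1.0]
```

In a real network the forward pass uses the binarized weights while the optimizer updates the latent full-precision weights via the STE gradient.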

awesome-efficient-aigc

A list of papers, docs, and code about efficient AIGC, covering both language and vision. This repo aims to provide information for efficient-AIGC research and is continuously improved. Pull requests adding works (papers, repositories) the repo has missed are welcome.

BiBERT

This project is the official implementation of our accepted ICLR 2022 paper BiBERT: Accurate Fully Binarized BERT.

BiPointNet

This project is the official implementation of our accepted ICLR 2021 paper BiPointNet: Binary Neural Network for Point Clouds.

Language: Python · License: MIT · Stargazers: 73 · Issues: 5

BiBench

This project is the official implementation of our accepted ICML 2023 paper BiBench: Benchmarking and Analyzing Network Binarization.

GoogleBard-VisUnderstand

How Good is Google Bard's Visual Understanding? An Empirical Study on Open Challenges

QuantSR

This project is the official implementation of our accepted NeurIPS 2023 (spotlight) paper QuantSR: Accurate Low-bit Quantization for Efficient Image Super-Resolution.

Language: Python · License: Apache-2.0 · Stargazers: 30 · Issues: 3
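As background for low-bit quantization work such as QuantSR, here is a minimal sketch of uniform symmetric k-bit quantize/dequantize, the basic operation such methods build on. QuantSR itself adds learned components beyond this; the function name and per-tensor scaling choice here are illustrative assumptions.

```python
# Uniform symmetric k-bit quantization sketch (illustrative baseline only).

def quantize(x, bits):
    """Quantize values in x to a symmetric k-bit integer grid, then
    dequantize back to floats. Assumes max(|x|) > 0."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 7 for signed 4-bit
    scale = max(abs(v) for v in x) / qmax   # per-tensor scale
    q = [max(-qmax, min(qmax, round(v / scale))) for v in x]  # quantize + clamp
    return [qi * scale for qi in q]         # dequantize

out = quantize([0.7, -0.7, 0.35, 0.0], bits=4)
```

The reconstruction error of this round-to-nearest baseline is what accurate low-bit methods try to reduce further, especially at 2–4 bits.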

BiFSMNv2

PyTorch implementation of BiFSMNv2 (TNNLS 2023).

IR-QLoRA

This project is the official implementation of our paper Accurate LoRA-Finetuning Quantization of LLMs via Information Retention.
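For readers new to the setting, a minimal sketch of the LoRA idea that quantized-LoRA methods build on: a frozen (here, quantized) weight matrix W is augmented with a trainable low-rank update B @ A, giving an effective weight W + (alpha / r) · B @ A. Names, shapes, and the scaling convention below are illustrative, not IR-QLoRA's actual API.

```python
# Plain-Python sketch of the LoRA effective weight (illustrative only).

def matmul(a, b):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_weight(W, A, B, alpha):
    """Effective weight W + (alpha / r) * B @ A.
    W: (out, in) frozen weight; A: (r, in); B: (out, r)."""
    r = len(A)                      # LoRA rank = number of rows of A
    BA = matmul(B, A)               # (out, r) @ (r, in) -> (out, in)
    s = alpha / r
    return [[W[i][j] + s * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]        # frozen 2x2 weight
A = [[1.0, 2.0]]                    # rank-1 factors
B = [[1.0], [0.0]]
print(lora_weight(W, A, B, alpha=1.0))  # -> [[2.0, 2.0], [0.0, 1.0]]
```

Only A and B are trained, so the memory cost of finetuning scales with the rank r rather than the full weight size; quantized variants keep W in low-bit form.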

BiFSMN

PyTorch implementation of BiFSMN (IJCAI 2022).

BiMatting

This project is the official implementation of our accepted NeurIPS 2023 paper BiMatting: Efficient Video Matting via Binarization.

DSG

This project is the official implementation of our accepted IEEE TPAMI paper Diverse Sample Generation: Pushing the Limit of Data-free Quantization.

Paper-Writing-Tips

Paper Writing Tips

Stargazers: 2 · Issues: 0

AOT

Associating Objects with Transformers for Video Object Segmentation

License: BSD-3-Clause · Stargazers: 0 · Issues: 0

ai_and_memory_wall

AI and Memory Wall blog post

License: MIT · Stargazers: 0 · Issues: 0

al-folio

A beautiful, simple, clean, and responsive Jekyll theme for academics

License: MIT · Stargazers: 0 · Issues: 0

aot-benchmark

An efficient modular implementation of Associating Objects with Transformers for Video Object Segmentation in PyTorch

Language: Python · License: BSD-3-Clause · Stargazers: 0 · Issues: 0

Awesome-Efficient-LLM

A curated list for Efficient Large Language Models

Stargazers: 0 · Issues: 0

Awesome-LLM-Compression

Awesome LLM compression research papers and tools.

License: MIT · Stargazers: 0 · Issues: 0

cnn-quantization

Quantization of convolutional neural networks.

Language: Python · Stargazers: 0 · Issues: 0

ColossalAI

Making big AI models cheaper, easier, and scalable

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

cv

Geoff Boeing's academic CV in LaTeX.

Language: TeX · License: MIT · Stargazers: 0 · Issues: 1

DFQ

PyTorch implementation of Data Free Quantization Through Weight Equalization and Bias Correction.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0
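The core idea of cross-layer weight equalization (the technique DFQ implements) can be sketched briefly: for two consecutive layers with a ReLU in between, scaling output channel i of layer 1 by 1/s_i and input channel i of layer 2 by s_i leaves the network function unchanged (ReLU is positively homogeneous), while balancing per-channel weight ranges for quantization. The sketch below uses fully connected weights and the range-equalizing scale s_i = sqrt(r1_i / r2_i); it is a simplified assumption-laden illustration, not the repo's actual code.

```python
# Cross-layer weight equalization sketch (illustrative, simplified).

def equalize(W1, W2):
    """W1: (out1, in) weights of layer 1; W2: (out2, out1) weights of
    layer 2. Returns scaled copies with equalized per-channel ranges."""
    out1 = len(W1)
    # Per-channel weight ranges: rows of W1, columns of W2.
    r1 = [max(abs(w) for w in W1[i]) for i in range(out1)]
    r2 = [max(abs(W2[j][i]) for j in range(len(W2))) for i in range(out1)]
    s = [(r1[i] / r2[i]) ** 0.5 for i in range(out1)]   # equalizing scales
    W1s = [[w / s[i] for w in W1[i]] for i in range(out1)]
    W2s = [[W2[j][i] * s[i] for i in range(out1)] for j in range(len(W2))]
    return W1s, W2s

# A channel that is large in layer 1 and small in layer 2 gets balanced:
print(equalize([[4.0], [1.0]], [[1.0, 4.0]]))
# -> ([[2.0], [2.0]], [[2.0, 2.0]])
```

After equalization both layers' channel ranges equal sqrt(r1_i * r2_i), so a single per-tensor quantization grid wastes far less precision on outlier channels.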

gpt4free

Decentralising the AI industry; just some language-model APIs...

Language: Python · License: GPL-3.0 · Stargazers: 0 · Issues: 0

micronet

micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT) with high-bit (>2b) methods (DoReFa; Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference) and low-bit (≤2b)/ternary and binary methods (TWN/BNN/XNOR-Net), plus 8-bit post-training quantization (PTQ) via TensorRT; (2) pruning: normal, regular, and group convolutional channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shapes.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0
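The "batch-normalization fusion for quantization" step mentioned above can be sketched concisely: the BN affine transform is folded into the preceding layer's weights and bias per output channel, so the fused layer can be quantized as a single op. The function below is a generic illustration of that standard folding, not micronet's actual API.

```python
import math

# Per-channel batch-norm folding sketch (generic technique, illustrative).

def fold_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BN into the preceding layer.
    W: (out, in) weights; b: (out,) bias; gamma/beta/mean/var are
    per-output-channel BN parameters and running statistics."""
    Wf, bf = [], []
    for i in range(len(W)):
        k = gamma[i] / math.sqrt(var[i] + eps)   # per-channel BN scale
        Wf.append([w * k for w in W[i]])         # W' = k * W
        bf.append((b[i] - mean[i]) * k + beta[i])  # b' = k*(b - mean) + beta
    return Wf, bf

# One output channel: gamma=2, beta=1, mean=0, var=1 doubles the weights
# and shifts the bias by beta.
print(fold_bn([[1.0, 2.0]], [0.0], [2.0], [1.0], [0.0], [1.0], eps=0.0))
# -> ([[2.0, 4.0]], [1.0])
```

After folding, BN disappears at inference time and quantization sees a single linear layer, which is why PTQ pipelines fuse before calibration.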

mmhuman3d

OpenMMLab 3D Human Parametric Model Toolbox and Benchmark

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

Neural-Network-Diffusion

We introduce a novel approach for parameter generation, named neural network diffusion (p-diff, where p stands for parameter), which employs a standard latent diffusion model to synthesize a new set of parameters.

Stargazers: 0 · Issues: 0

STM-Training

Training script for the Space-Time Memory network.

Language: Python · License: GPL-3.0 · Stargazers: 0 · Issues: 0