Haotong Qin (htqin)

Company: ETH Zurich

Location: Zürich, Switzerland

Home Page: https://htqin.github.io/

Twitter: @qin_haotong

Haotong Qin's repositories

QuantSR

This project is the official implementation of our accepted NeurIPS 2023 (spotlight) paper QuantSR: Accurate Low-bit Quantization for Efficient Image Super-Resolution.

Language: Python · License: Apache-2.0 · Stargazers: 35 · Issues: 0
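
Low-bit quantization, the technique named in the QuantSR description above, boils down to mapping full-precision weights onto a small integer grid. Below is a minimal, hedged sketch of uniform symmetric weight quantization in PyTorch; the 4-bit setting, max-based scale, and rounding scheme are illustrative assumptions, not QuantSR's actual method.

```python
# Minimal sketch of uniform symmetric low-bit quantization (illustrative only;
# the bit width and max-based scale are assumptions, not QuantSR's method).
import torch

def quantize_dequantize(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Map weights onto a symmetric signed integer grid and back to floats."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 levels per side for 4-bit
    scale = w.abs().max().clamp(min=1e-8) / qmax    # per-tensor scale factor
    q = torch.clamp(torch.round(w / scale), -qmax, qmax)
    return q * scale                                # simulated low-bit weights

w = torch.randn(64, 64)
w_q = quantize_dequantize(w, bits=4)
print((w - w_q).abs().mean())                       # average quantization error
```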

awesome-model-quantization

A list of papers, docs, and code about model quantization. This repo aims to provide information for model quantization research and is continuously improved. Pull requests adding works (papers, repositories) we have missed are welcome.

Stargazers: 1675 · Issues: 0

awesome-efficient-aigc

A list of papers, docs, and code about efficient AIGC, covering both language and vision. This repo aims to provide information for efficient AIGC research and is continuously improved. Pull requests adding works (papers, repositories) we have missed are welcome.

Stargazers: 113 · Issues: 0

IR-QLoRA

This project is the official implementation of our paper Accurate LoRA-Finetuning Quantization of LLMs via Information Retention (IR-QLoRA).

Language: Python · Stargazers: 32 · Issues: 0
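
The description above combines two ideas: a frozen low-bit base weight and a trainable low-rank (LoRA) adapter added on top. The sketch below shows only that generic combination; the simulated 4-bit quantizer, rank, and initialization are assumptions for illustration, not IR-QLoRA's information-retention method.

```python
# Generic sketch of a LoRA adapter over a frozen, simulated low-bit base weight
# (illustrative assumptions throughout; not the IR-QLoRA algorithm).
import torch
import torch.nn as nn

class QuantLoRALinear(nn.Module):
    def __init__(self, in_f: int, out_f: int, rank: int = 8, bits: int = 4):
        super().__init__()
        w = torch.randn(out_f, in_f) * 0.02
        qmax = 2 ** (bits - 1) - 1
        self.scale = w.abs().max() / qmax
        # frozen, simulated low-bit base weight (stored as integer levels)
        self.register_buffer("w_q", torch.clamp(torch.round(w / self.scale), -qmax, qmax))
        # trainable low-rank adapter: B starts at zero so training begins from the base model
        self.lora_a = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_f, rank))

    def forward(self, x):
        base = x @ (self.w_q * self.scale).t()          # dequantized frozen path
        return base + x @ self.lora_a.t() @ self.lora_b.t()

layer = QuantLoRALinear(128, 64)
print(layer(torch.randn(2, 128)).shape)                 # torch.Size([2, 64])
```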

Awesome-Efficient-LLM

A curated list for Efficient Large Language Models

Stargazers: 1 · Issues: 0

Awesome-LLM-Compression

Awesome LLM compression research papers and tools.

License: MIT · Stargazers: 0 · Issues: 0

BiMatting

This project is the official implementation of our accepted NeurIPS 2023 paper BiMatting: Efficient Video Matting via Binarization.

Language: Python · Stargazers: 20 · Issues: 0

BiBench

This project is the official implementation of our accepted ICML 2023 paper BiBench: Benchmarking and Analyzing Network Binarization.

Language: Python · Stargazers: 48 · Issues: 0

Neural-Network-Diffusion

We introduce a novel approach for parameter generation, named neural network diffusion (p-diff, where p stands for parameter), which employs a standard latent diffusion model to synthesize a new set of parameters.

Stargazers: 0 · Issues: 0
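
The description above treats network parameters themselves as data for a latent diffusion model. As a hedged illustration of only the data-handling step, the sketch below flattens a model's parameters into a vector (the kind of sample such a generator would be trained on) and loads a vector back into the model; the diffusion model itself is omitted, and nothing here is the official p-diff pipeline.

```python
# "Parameters as data": flatten a model's weights into a vector and restore them.
# This only illustrates the data-handling step; the generator is a placeholder.
import torch
import torch.nn as nn
from torch.nn.utils import parameters_to_vector, vector_to_parameters

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

flat = parameters_to_vector(model.parameters())      # 1-D tensor of all weights
print(flat.shape)                                    # one training sample for a parameter generator

# A diffusion model would synthesize a new vector of the same length; here we
# just perturb the original as a stand-in for a generated sample.
new_flat = flat + 0.01 * torch.randn_like(flat)
vector_to_parameters(new_flat, model.parameters())   # load generated parameters back
```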

al-folio

A beautiful, simple, clean, and responsive Jekyll theme for academics

License: MIT · Stargazers: 0 · Issues: 0

GoogleBard-VisUnderstand

How Good is Google Bard's Visual Understanding? An Empirical Study on Open Challenges

Stargazers: 30 · Issues: 0

BiBERT

This project is the official implementation of our accepted ICLR 2022 paper BiBERT: Accurate Fully Binarized BERT.

Language: Python · Stargazers: 78 · Issues: 0

gpt4free

Decentralising the AI industry; just some language model APIs...

License: GPL-3.0 · Stargazers: 0 · Issues: 0

DSG

This project is the official implementation of our accepted IEEE TPAMI paper Diverse Sample Generation: Pushing the Limit of Data-free Quantization.

Language: Python · Stargazers: 14 · Issues: 0
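
Data-free quantization, as in the DSG description above, needs calibration data without access to the original training set; a common ingredient is synthesizing inputs whose feature statistics match a pretrained model's stored batch-norm statistics. The sketch below shows only that generic ingredient with a made-up layer, loss, and hyperparameters; it is not the DSG algorithm.

```python
# Generic data-free calibration idea: optimize synthetic inputs so that the
# statistics of the features feeding a BatchNorm layer match its running stats.
# Layers, loss, and settings are illustrative assumptions, not DSG itself.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU())
model.eval()
bn = model[1]

x = torch.randn(16, 3, 32, 32, requires_grad=True)   # synthetic "images"
opt = torch.optim.Adam([x], lr=0.1)

for _ in range(100):
    feat = model[0](x)                                # features entering the BN layer
    mu, var = feat.mean(dim=(0, 2, 3)), feat.var(dim=(0, 2, 3))
    loss = (mu - bn.running_mean).pow(2).mean() + (var - bn.running_var).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(loss.item())   # the synthetic batch now roughly matches the stored statistics
```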

ColossalAI

Making big AI models cheaper, easier to use, and more scalable

License: Apache-2.0 · Stargazers: 0 · Issues: 0

BiFSMNv2

PyTorch implementation of BiFSMNv2 (TNNLS 2023).

Language: Python · Stargazers: 24 · Issues: 0

BiFSMN

PyTorch implementation of BiFSMN (IJCAI 2022).

Language: Python · Stargazers: 20 · Issues: 0

mmhuman3d

OpenMMLab 3D Human Parametric Model Toolbox and Benchmark

License: Apache-2.0 · Stargazers: 0 · Issues: 0

AOT

Associating Objects with Transformers for Video Object Segmentation

License: BSD-3-Clause · Stargazers: 0 · Issues: 0

STM-Training

Training script for the Space-Time Memory (STM) network.

License: GPL-3.0 · Stargazers: 0 · Issues: 0

Paper-Writing-Tips

Paper Writing Tips

Stargazers: 2 · Issues: 0

aot-benchmark

An efficient modular implementation of Associating Objects with Transformers for Video Object Segmentation in PyTorch

License: BSD-3-Clause · Stargazers: 0 · Issues: 0

micronet

micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT) with high-bit (>2b) methods (DoReFa; Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference) and low-bit (≤2b) ternary/binary methods (TWN/BNN/XNOR-Net), plus 8-bit post-training quantization (PTQ, TensorRT); (2) pruning: normal, regular, and group convolutional channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT with fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), and dynamic shapes.

License: MIT · Stargazers: 0 · Issues: 0
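
Of the features listed for micronet above, quantization-aware training (QAT) is the easiest to show in a few lines: weights are fake-quantized in the forward pass while gradients flow straight through to the full-precision copies. The sketch below is a generic illustration with an assumed bit width and toy training step; it is not micronet's API.

```python
# Generic QAT sketch: fake-quantize weights in forward, use a straight-through
# estimator (STE) in backward. Bit width and training loop are assumptions.
import torch
import torch.nn as nn

def fake_quant(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().detach().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(w / scale), -qmax, qmax) * scale
    return w + (q - w).detach()          # forward uses q, backward sees identity

class QATLinear(nn.Linear):
    def forward(self, x):
        return nn.functional.linear(x, fake_quant(self.weight), self.bias)

layer = QATLinear(16, 4)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
x, y = torch.randn(8, 16), torch.randn(8, 4)
loss = nn.functional.mse_loss(layer(x), y)
loss.backward(); opt.step()              # the full-precision weights are updated as usual
```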

ai_and_memory_wall

AI and Memory Wall blog post

License: MIT · Stargazers: 0 · Issues: 0

cv

Geoff Boeing's academic CV in LaTeX

License: MIT · Stargazers: 0 · Issues: 0

BiPointNet

This project is the official implementation of our accepted ICLR 2021 paper BiPointNet: Binary Neural Network for Point Clouds.

Language: Python · License: MIT · Stargazers: 73 · Issues: 0

cnn-quantization

Quantization of Convolutional Neural Networks.

Stargazers: 0 · Issues: 0

DFQ

PyTorch implementation of Data-Free Quantization Through Weight Equalization and Bias Correction.

License: MIT · Stargazers: 0 · Issues: 0
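
The paper this repo implements equalizes weight ranges across consecutive layers so that per-tensor quantization loses less information. Below is a minimal sketch of that cross-layer equalization idea on two toy linear layers; the range measure, shapes, and check are simplified assumptions rather than the repo's code.

```python
# Cross-layer weight equalization sketch: move per-channel scales between two
# consecutive layers so both have similar weight ranges. Toy layers only.
import torch
import torch.nn as nn

lin1, lin2 = nn.Linear(16, 32), nn.Linear(32, 8)
x = torch.randn(4, 16)
ref = lin2(torch.relu(lin1(x)))                       # output before equalization

with torch.no_grad():
    r1 = lin1.weight.abs().max(dim=1).values          # range of each output channel of layer 1
    r2 = lin2.weight.abs().max(dim=0).values          # range of each input channel of layer 2
    s = torch.sqrt(r1 / r2.clamp(min=1e-8))           # per-channel equalization scale

    lin1.weight.div_(s.unsqueeze(1)); lin1.bias.div_(s)   # scale channel i down in layer 1 ...
    lin2.weight.mul_(s.unsqueeze(0))                      # ... and back up in layer 2

out = lin2(torch.relu(lin1(x)))
print(torch.allclose(ref, out, atol=1e-5))            # ReLU(a*x) = a*ReLU(x) for a > 0, so the function is preserved
```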

IR-Net

This project is the PyTorch implementation of our CVPR 2020 paper: Forward and Backward Information Retention for Accurate Binary Neural Networks.

Language: Python · Stargazers: 174 · Issues: 0
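
The binary networks in IR-Net (and the other Bi* repositories above) rest on a common building block: weights binarized with sign() in the forward pass and a straight-through estimator in the backward pass. The sketch below shows only that generic block with an assumed per-tensor scaling factor; IR-Net's specific information-retention techniques are not reproduced here.

```python
# Generic weight binarization with a straight-through estimator (STE).
# The mean-absolute scaling is an illustrative choice, not IR-Net's method.
import torch
import torch.nn as nn

def binarize(w: torch.Tensor) -> torch.Tensor:
    alpha = w.abs().mean()                     # per-tensor scaling factor
    b = torch.sign(w) * alpha                  # {-alpha, 0 (rare), +alpha} weights
    return w + (b - w).detach()                # STE: forward uses b, backward sees identity

class BinaryLinear(nn.Linear):
    def forward(self, x):
        return nn.functional.linear(x, binarize(self.weight), self.bias)

layer = BinaryLinear(16, 4)
out = layer(torch.randn(8, 16))
out.sum().backward()                           # gradients reach the latent real-valued weights
print(layer.weight.grad.abs().sum() > 0)
```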