talenz's starred repositories

segment-anything-2

The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.

Language: Jupyter Notebook · License: Apache-2.0 · Stars: 7747 · Issues: 0
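A minimal image-prompted inference sketch in the style of the SAM 2 README; the checkpoint path, config name, image file, and point prompt below are assumptions to adjust for the model actually downloaded.

```python
import numpy as np
import torch
from PIL import Image
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Assumed checkpoint/config names; substitute whichever model size you downloaded.
checkpoint = "./checkpoints/sam2_hiera_large.pt"
model_cfg = "sam2_hiera_l.yaml"
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

image = np.array(Image.open("example.jpg").convert("RGB"))  # HxWx3 uint8 RGB
with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(image)
    # One foreground point prompt at pixel (x=500, y=375).
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[500, 375]]),
        point_labels=np.array([1]),
    )
```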

llmc

This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit".

Language: Python · License: Apache-2.0 · Stars: 159 · Issues: 0

GroundingDINO

[ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"

Language: Python · License: Apache-2.0 · Stars: 5866 · Issues: 0
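A short open-set detection sketch using the inference helpers the GroundingDINO README demonstrates; the config path, checkpoint path, text prompt, and thresholds are assumptions.

```python
import cv2
from groundingdino.util.inference import load_model, load_image, predict, annotate

# Assumed paths to the SwinT config and checkpoint shipped with the repo.
model = load_model("groundingdino/config/GroundingDINO_SwinT_OGC.py",
                   "weights/groundingdino_swint_ogc.pth")
image_source, image = load_image("example.jpg")

# Detect everything matching a free-form, dot-separated text prompt.
boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption="chair . person . dog .",
    box_threshold=0.35,
    text_threshold=0.25,
)

annotated_frame = annotate(image_source=image_source, boxes=boxes,
                           logits=logits, phrases=phrases)
cv2.imwrite("annotated.jpg", annotated_frame)
```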

I-BERT

[ICML'21 Oral] I-BERT: Integer-only BERT Quantization

Language: Python · License: MIT · Stars: 219 · Issues: 0

Awesome-Multimodal-Large-Language-Models

✨✨ Latest Advances on Multimodal Large Language Models

Stars: 10986 · Issues: 0

kinetics-downloader

Download DeepMind's Kinetics dataset.

Language: Python · License: MIT · Stars: 260 · Issues: 0

MQBench

Model Quantization Benchmark

Language: Shell · License: Apache-2.0 · Stars: 742 · Issues: 0
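A rough sketch of the prepare → calibrate → quantize flow described in the MQBench README; the backend choice, model, and random calibration batches are assumptions standing in for a real deployment setup.

```python
import torch
import torchvision
from mqbench.prepare_by_platform import prepare_by_platform, BackendType
from mqbench.utils.state import enable_calibration, enable_quantization

model = torchvision.models.resnet18().eval()

# Insert fake-quantize nodes configured for the TensorRT deployment backend.
model = prepare_by_platform(model, BackendType.Tensorrt)

enable_calibration(model)            # collect activation statistics
for _ in range(8):                   # random batches stand in for real calibration data
    model(torch.randn(4, 3, 224, 224))

enable_quantization(model)           # switch to fake-quantized inference
output = model(torch.randn(1, 3, 224, 224))
```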

F8Net

[ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization

Language: Python · License: NOASSERTION · Stars: 95 · Issues: 0

Nonuniform-to-Uniform-Quantization

Nonuniform-to-Uniform Quantization: Towards Accurate Quantization via Generalized Straight-Through Estimation. In CVPR 2022.

Language: Python · Stars: 111 · Issues: 0
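For background, the common baseline such quantization works start from is a uniform quantizer trained with the straight-through estimator (STE): the forward pass rounds to a discrete grid while the backward pass treats rounding as the identity. A generic sketch of that baseline (not this paper's generalized estimator):

```python
import torch

class UniformQuantizeSTE(torch.autograd.Function):
    """k-bit uniform fake quantization with a straight-through gradient."""

    @staticmethod
    def forward(ctx, x, num_bits=4):
        # Map x in [0, 1] onto 2^k - 1 uniform levels, then back to [0, 1].
        levels = 2 ** num_bits - 1
        return torch.round(x.clamp(0, 1) * levels) / levels

    @staticmethod
    def backward(ctx, grad_output):
        # STE: pretend the rounding step was the identity.
        return grad_output, None

x = torch.rand(4, requires_grad=True)
y = UniformQuantizeSTE.apply(x, 2)
y.sum().backward()
print(x.grad)  # all ones: gradients passed straight through the quantizer
```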

larq

An Open-Source Library for Training Binarized Neural Networks

Language: Python · License: Apache-2.0 · Stars: 697 · Issues: 0
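A small sketch of a binarized Keras stack built from Larq's quantized layers, following the pattern in the Larq documentation; the architecture itself is an arbitrary example.

```python
import tensorflow as tf
import larq as lq

# Binarized weights and activations: sign quantizers with straight-through
# gradients, plus weight clipping.
kwargs = dict(input_quantizer="ste_sign",
              kernel_quantizer="ste_sign",
              kernel_constraint="weight_clip",
              use_bias=False)

model = tf.keras.models.Sequential([
    # First layer keeps real-valued inputs; only its weights are binarized.
    lq.layers.QuantConv2D(32, (3, 3), kernel_quantizer="ste_sign",
                          kernel_constraint="weight_clip", use_bias=False,
                          input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.BatchNormalization(scale=False),
    lq.layers.QuantConv2D(64, (3, 3), **kwargs),
    tf.keras.layers.Flatten(),
    lq.layers.QuantDense(10, **kwargs),
    tf.keras.layers.Activation("softmax"),
])
lq.models.summary(model)
```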

ReCU

PyTorch implementation of our ICCV 2021 paper "ReCU: Reviving the Dead Weights in Binary Neural Networks" (http://arxiv.org/abs/2103.12369).

Language: Python · Stars: 39 · Issues: 0

DAQ

An official PyTorch implementation of the paper "Distance-aware Quantization", ICCV 2021.

Language: Python · License: GPL-3.0 · Stars: 45 · Issues: 0

temporal-shift-module

[ICCV 2019] TSM: Temporal Shift Module for Efficient Video Understanding

Language: Python · License: MIT · Stars: 2044 · Issues: 0
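TSM's core trick is shifting a fraction of the channels along the temporal dimension so a 2D CNN can exchange information between frames at essentially zero extra compute. A standalone sketch of that shift (not the repo's exact module):

```python
import torch

def temporal_shift(x: torch.Tensor, n_segment: int, fold_div: int = 8) -> torch.Tensor:
    """Shift 1/fold_div of the channels backward and 1/fold_div forward in time.

    x has shape (N*T, C, H, W), where T = n_segment frames per clip.
    """
    nt, c, h, w = x.size()
    n_batch = nt // n_segment
    x = x.view(n_batch, n_segment, c, h, w)

    fold = c // fold_div
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                    # shift back in time
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]    # shift forward in time
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]               # leave the rest untouched
    return out.view(nt, c, h, w)

# Example: 2 clips of 8 frames each, 64 channels, 56x56 feature maps.
features = torch.randn(2 * 8, 64, 56, 56)
shifted = temporal_shift(features, n_segment=8)
```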

BiDet

The official PyTorch implementation of the paper "BiDet: An Efficient Binarized Object Detector", accepted at CVPR 2020.

Language: Python · License: MIT · Stars: 174 · Issues: 0

PyHessian

PyHessian is a PyTorch library for second-order (Hessian-based) analysis and training of neural networks.

Language: Jupyter Notebook · License: MIT · Stars: 664 · Issues: 0
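A minimal sketch of estimating top Hessian eigenvalues and the Hessian trace with PyHessian's `hessian` interface; the toy model and random batch are placeholders.

```python
import torch
from pyhessian import hessian  # PyHessian's main entry point

# A tiny model and a single batch are enough to illustrate the interface.
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
criterion = torch.nn.CrossEntropyLoss()
inputs = torch.randn(64, 10)
targets = torch.randint(0, 2, (64,))

hessian_comp = hessian(model, criterion, data=(inputs, targets), cuda=False)
top_eigenvalues, top_eigenvectors = hessian_comp.eigenvalues()  # power iteration
trace = hessian_comp.trace()                                    # Hutchinson estimate
print("top eigenvalue:", top_eigenvalues[-1])
print("trace estimate:", sum(trace) / len(trace))
```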

FracBits

Neural Network Quantization With Fractional Bit-widths

Language: Python · Stars: 11 · Issues: 0

Double-Win-Quant

[ICML 2021] "Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inference" by Yonggan Fu, Qixuan Yu, Meng Li, Vikas Chandra, Yingyan Lin

Language: Python · License: MIT · Stars: 12 · Issues: 0

Yet-Another-EfficientDet-Pytorch

A PyTorch re-implementation of the official EfficientDet with SOTA real-time performance and pretrained weights.

Language: Jupyter Notebook · License: LGPL-3.0 · Stars: 5198 · Issues: 0

scale-adjusted-training

PyTorch implementation of Towards Efficient Training for Neural Network Quantization

Language: Python · Stars: 15 · Issues: 0

permute-quantize-finetune

Using ideas from product quantization for state-of-the-art neural network compression.

Language: Python · License: NOASSERTION · Stars: 145 · Issues: 0

EWGS

An official implementation of "Network Quantization with Element-wise Gradient Scaling" (CVPR 2021) in PyTorch.

Language: Python · License: GPL-3.0 · Stars: 88 · Issues: 0

leetcode_101

LeetCode 101: Solving LeetCode problems with ease, together with you (C++)

Stars: 8127 · Issues: 0

FAT_Quantization

PyTorch implementation of "FAT: Learning Low-Bitwidth Parametric Representation via Frequency-Aware Transformation".

Language: Jupyter Notebook · License: MIT · Stars: 27 · Issues: 0

aimet

AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.

Language: Python · License: NOASSERTION · Stars: 2042 · Issues: 0
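A hedged sketch of AIMET's quantization-simulation flow as described in its documentation; the class and argument names below are assumptions drawn from that documentation and may differ across AIMET versions.

```python
import torch
from aimet_common.defs import QuantScheme           # assumed import path
from aimet_torch.quantsim import QuantizationSimModel  # assumed import path

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
dummy_input = torch.randn(1, 3, 32, 32)

# Wrap the model with simulated 8-bit quantization ops.
sim = QuantizationSimModel(model,
                           dummy_input=dummy_input,
                           quant_scheme=QuantScheme.post_training_tf_enhanced,
                           default_param_bw=8,
                           default_output_bw=8)

def forward_pass(m, _):
    # Run representative data through the model so AIMET can pick quantization ranges.
    with torch.no_grad():
        m(dummy_input)

sim.compute_encodings(forward_pass, forward_pass_callback_args=None)
quantized_output = sim.model(dummy_input)
```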

awesome-model-quantization

A list of papers, docs, and code about model quantization. The repo aims to collect information for model quantization research and is continuously being improved. PRs adding works (papers, repositories) that are still missing are welcome.

Stars: 1749 · Issues: 0

BRECQ

PyTorch implementation of BRECQ (ICLR 2021).

Language: Python · License: MIT · Stars: 243 · Issues: 0

BSQ

BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021)

Language: Python · License: Apache-2.0 · Stars: 36 · Issues: 0