Minsoo Kim (MarsJacobs)



Company: Hanyang University

Location: Seoul

Home Page: https://marsjacobs.github.io/


Minsoo Kim's repositories

kd-qat-large-enc

[EMNLP 2022 main] Code for "Understanding and Improving Knowledge Distillation for Quantization-Aware-Training of Large Transformer Encoders"

Language: Jupyter Notebook · Stargazers: 7 · Issues: 1 · Issues: 0
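For context, knowledge distillation in a QAT setting typically matches a fake-quantized student's logits to a full-precision teacher's logits. The sketch below shows a standard soft-label KD loss in PyTorch; the function name and temperature value are illustrative and not taken from this repository.

```python
import torch.nn.functional as F

def kd_logit_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label KD loss: KL(teacher || student) over temperature-softened logits."""
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)
```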

ti-kd-qat

[EACL 2023 main] A PyTorch implementation of "Teacher Intervention: Improving Convergence of Quantization Aware Training for Ultra-Low Precision Transformers"

mbv1_brevitas

A Brevitas-based quantization-aware training framework for MobileNetV1

Language: Python · Stargazers: 1 · Issues: 1 · Issues: 0
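As a rough illustration of how such a framework is built, the sketch below expresses a MobileNetV1-style depthwise-separable block with Brevitas quantized layers; the bit widths and block structure are illustrative assumptions, not this repository's configuration.

```python
import torch.nn as nn
from brevitas.nn import QuantConv2d, QuantReLU

def quant_dw_separable(in_ch, out_ch, stride=1, bit_width=4):
    """Depthwise-separable block with fake-quantized weights and activations."""
    return nn.Sequential(
        # Depthwise 3x3 convolution with quantized weights.
        QuantConv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                    groups=in_ch, weight_bit_width=bit_width),
        nn.BatchNorm2d(in_ch),
        QuantReLU(bit_width=bit_width),
        # Pointwise 1x1 convolution with quantized weights.
        QuantConv2d(in_ch, out_ch, 1, weight_bit_width=bit_width),
        nn.BatchNorm2d(out_ch),
        QuantReLU(bit_width=bit_width),
    )
```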

efficientdet-pytorch

A PyTorch implementation of EfficientDet, faithful to the original Google implementation, with ported weights

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

finetune-transformer-lm

Code and model for the paper "Improving Language Understanding by Generative Pre-Training"

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

llm-awq

AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration

Language: Python · License: MIT · Stargazers: 0 · Issues: 0
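The core idea of AWQ is to protect salient weight channels by scaling them according to activation statistics before low-bit quantization. The toy sketch below illustrates that scale-then-round idea; the scaling exponent and quantization details are illustrative assumptions and do not reproduce the repository's implementation.

```python
import torch

def awq_style_quantize(weight, act_scale, n_bits=4, alpha=0.5):
    """Toy activation-aware weight quantization.

    weight:    (out_features, in_features) linear weight
    act_scale: (in_features,) mean |activation| per input channel
    """
    # Scale salient input channels up before quantization so their error shrinks.
    s = act_scale.clamp(min=1e-5).pow(alpha)
    w_scaled = weight * s

    # Per-output-channel symmetric round-to-nearest quantization.
    qmax = 2 ** (n_bits - 1) - 1
    step = w_scaled.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / qmax
    w_q = torch.clamp(torch.round(w_scaled / step), -qmax - 1, qmax) * step

    # Fold the inverse scale back so the layer output is unchanged
    # up to quantization error; at runtime 1/s is absorbed upstream.
    return w_q / s, s
```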

lsq-net

Unofficial implementation of LSQ-Net, a neural network quantization framework

Language: Python · License: MIT · Stargazers: 0 · Issues: 0
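LSQ treats the quantizer step size as a learnable parameter trained jointly with the weights. A minimal sketch of that fake-quantization step, using the straight-through estimator and the paper's gradient scaling, assuming per-tensor quantization:

```python
import torch

def lsq_fake_quant(x, step, n_bits=4, signed=True):
    """LSQ-style fake quantization with a learnable step size `step`."""
    if signed:
        qn, qp = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
    else:
        qn, qp = 0, 2 ** n_bits - 1

    # LSQ scales the step-size gradient by 1/sqrt(N * qp) for stable training;
    # the expression below rescales only the gradient, not the forward value.
    grad_scale = 1.0 / (x.numel() * qp) ** 0.5
    s = step * grad_scale + (step - step * grad_scale).detach()

    v = torch.clamp(x / s, qn, qp)
    v_q = v + (torch.round(v) - v).detach()  # straight-through rounding
    return v_q * s
```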

model-quantization

A collection of model quantization algorithms

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 0

Pretrained-Language-Model

Pretrained language models and related optimization techniques developed by Huawei Noah's Ark Lab

Language: Python · Stargazers: 0 · Issues: 0

TernGEMM

TernGEMM: General Matrix Multiply Library with Ternary Weights for Fast DNN Inference

Language: C++ · License: GPL-3.0 · Stargazers: 0 · Issues: 0
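With weights restricted to {-1, 0, +1}, each inner product reduces to additions and subtractions, which is the property a ternary GEMM kernel can exploit. The sketch below shows a common TWN-style ternarization heuristic as an illustration; the threshold rule is an assumption, not TernGEMM's exact scheme.

```python
import torch

def ternarize(weight, threshold_ratio=0.7):
    """Map full-precision weights to {-1, 0, +1} plus one per-tensor scale."""
    delta = threshold_ratio * weight.abs().mean()
    mask = (weight.abs() > delta).float()
    ternary = torch.sign(weight) * mask
    # Scale that minimizes L2 error over the retained (non-zero) entries.
    scale = (weight.abs() * mask).sum() / mask.sum().clamp(min=1.0)
    return ternary, scale
```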

TSLD

[NeurIPS 2023] Token-Scaled Logit Distillation for Ternary Weight Generative Language Models

Stargazers: 0 · Issues: 0

Yet-Another-EfficientDet-Pytorch

A PyTorch re-implementation of the official EfficientDet with real-time SOTA performance and pretrained weights

Language: Jupyter Notebook · License: LGPL-3.0 · Stargazers: 0 · Issues: 0