EETQ

Easy & Efficient Quantization for Transformers


Features

  • New🔥: Implemented GEMV in W8A16, improving performance by 10%~30%.
  • INT8 weight-only PTQ
    • High-performance GEMM kernels adapted from FasterTransformer (original code)
    • No quantization-aware training required
  • Optimized attention layer using Flash-Attention V2
  • Easy to use: adapts to your PyTorch model with one line of code
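To make the weight-only idea concrete, here is a minimal, illustrative sketch of per-channel symmetric INT8 quantization and a W8A16-style GEMV that dequantizes on the fly. This is plain Python for clarity only; EETQ's actual kernels are fused CUDA implementations, and the function names here are invented for the example.

```python
# Illustrative sketch (not EETQ's kernel): per-channel INT8 weight-only
# quantization plus a W8A16-style GEMV.

def quantize_per_channel(weights):
    """Quantize each output channel (row) of `weights` to int8.

    Symmetric quantization: scale = max(|w|) / 127, q = round(w / scale).
    Returns (int8 rows, per-row float scales). No training is needed --
    this is pure post-training quantization.
    """
    q_rows, scales = [], []
    for row in weights:
        scale = max(abs(w) for w in row) / 127.0 or 1.0
        q_rows.append([round(w / scale) for w in row])
        scales.append(scale)
    return q_rows, scales

def gemv_w8a16(q_rows, scales, x):
    """y = W @ x with int8 weights and float (stand-in for fp16) activations.

    The integer dot product is accumulated first and rescaled per channel
    at the end, mirroring how a fused W8A16 kernel defers dequantization.
    """
    return [scale * sum(q * xi for q, xi in zip(row, x))
            for row, scale in zip(q_rows, scales)]

W = [[0.5, -1.0, 0.25], [2.0, 0.1, -0.3]]
x = [1.0, 2.0, 3.0]
q, s = quantize_per_channel(W)
y = gemv_w8a16(q, s, x)  # close to the full-precision matvec
```

The per-channel scales keep quantization error small even when rows differ widely in magnitude, which is why weight-only INT8 typically needs no retraining.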

Getting started

Environment

  • cuda:>=11.4
  • python:>=3.8
  • gcc:>= 7.4.0
  • torch:>=1.14.0
  • transformers:>=4.27.0

These are minimum versions; newer versions are recommended.
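As a convenience, the basics of the environment above can be checked with a short script (a sketch, not part of EETQ; it checks the Python floor directly and, for the listed packages, only verifies that they are installed):

```python
# Sketch: sanity-check the minimum environment listed above.
import importlib.util
import sys

def check_env(min_python=(3, 8), packages=("torch", "transformers")):
    """Return a list of problems; an empty list means the basics look OK."""
    problems = []
    if sys.version_info < min_python:
        problems.append("Python %d.%d+ required" % min_python)
    for pkg in packages:
        # find_spec returns None when the top-level package is absent.
        if importlib.util.find_spec(pkg) is None:
            problems.append(pkg + " is not installed")
    return problems

for problem in check_env():
    print("WARNING:", problem)
```

CUDA and gcc versions are easiest to check from the shell (`nvcc --version`, `gcc --version`).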

Installation

Using the provided Dockerfile is recommended. Alternatively, build from source:

$ git clone https://github.com/NetEase-FuXi/EETQ.git
$ cd EETQ/
$ git submodule update --init --recursive
$ pip install .

If your machine has less than 96GB of RAM and many CPU cores, ninja may launch too many parallel compilation jobs and exhaust memory. To limit the number of parallel jobs, set the MAX_JOBS environment variable:

$ MAX_JOBS=4 pip install .

Usage

  1. Use EETQ in transformers.
from transformers import AutoModelForCausalLM, EetqConfig
path = "/path/to/model"
quantization_config = EetqConfig("int8")
model = AutoModelForCausalLM.from_pretrained(path, device_map="auto", quantization_config=quantization_config)

A quantized model can be saved via "save_pretrained" and reloaded via "from_pretrained".

quant_path = "/path/to/save/quantized/model"
model.save_pretrained(quant_path)
model = AutoModelForCausalLM.from_pretrained(quant_path, device_map="auto")
  2. Quantize a torch model
from eetq.utils import eet_quantize
eet_quantize(torch_model)

Quantize torch model and save

from eetq import AutoEETQForCausalLM
from transformers import AutoTokenizer

model_name = "/path/to/your/model"
quant_path = "/path/to/quantized/model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoEETQForCausalLM.from_pretrained(model_name)
model.quantize(quant_path)
tokenizer.save_pretrained(quant_path)
  3. Quantize a torch model and optimize with flash attention
...
model = AutoModelForCausalLM.from_pretrained(model_name, config=config, torch_dtype=torch.float16)
from eetq.utils import eet_accelerator
eet_accelerator(model, quantize=True, fused_attn=True, dev="cuda:0")
model.to("cuda:0")

# inference
res = model.generate(...)
  4. Use EETQ in TGI. See this PR.
text-generation-launcher --model-id mistralai/Mistral-7B-v0.1 --quantize eetq ...
  5. Use EETQ in LoRAX. See docs here.
lorax-launcher --model-id mistralai/Mistral-7B-v0.1 --quantize eetq ...
  6. Load a quantized model in vLLM (in progress; see the vLLM support PR).
python -m vllm.entrypoints.openai.api_server --model /path/to/quantized/model  --quantization eetq --trust-remote-code

Examples


Performance

  • llama-13b (tested on a 3090), prompt=1024, max_new_tokens=50
