qwopqwop200 / GPTQ-for-LLaMa

4 bits quantization of LLaMA using GPTQ


Finetuning Quantized LLaMA

Qifeng-Wu99 opened this issue

Hello,

I really appreciate your work done here.

I wonder if you could also release a Python script for finetuning quantized LLaMA on a custom dataset.

Quantization inevitably degrades performance to some extent, while finetuning could recover quality on a user's target dataset.
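To illustrate where that degradation comes from, here is a minimal sketch of plain round-to-nearest 4-bit quantization of a weight vector and the reconstruction error it introduces. This is only a toy illustration of why finetuning after quantization is attractive; it is not GPTQ itself, which instead minimizes per-layer output error rather than per-weight rounding error.

```python
def quantize_4bit(weights):
    """Symmetric round-to-nearest 4-bit quantization (toy sketch, not GPTQ).

    Maps each weight to one of 16 integer levels in [-8, 7], then
    dequantizes back to floats so the rounding error is visible.
    """
    max_abs = max(abs(w) for w in weights)
    if max_abs == 0.0:
        return list(weights), 0.0
    scale = max_abs / 7  # positive int4 range is [0, 7]
    quantized = [max(-8, min(7, round(w / scale))) for w in weights]
    dequantized = [q * scale for q in quantized]
    return dequantized, scale


weights = [0.10, -0.50, 0.70, 0.33]
approx, scale = quantize_4bit(weights)
errors = [abs(w - a) for w, a in zip(weights, approx)]
print(approx)       # reconstructed weights after 4-bit rounding
print(max(errors))  # worst-case rounding error, bounded by scale / 2
```

The rounding error is what finetuning on a downstream dataset could partially compensate for, by adjusting the remaining trainable parameters around the frozen quantized weights.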

Thank you.