qwopqwop200 / GPTQ-for-LLaMa

4 bits quantization of LLaMA using GPTQ


Can it support the OpenLLaMA model?

Ted8000 opened this issue · comments

I used it with OpenLLaMA, but the generated results don't look right.
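
A first debugging step (not from this thread, just a suggestion) is to confirm that the unquantized OpenLLaMA checkpoint generates sensible text on its own before blaming the GPTQ step. The sketch below assumes the openlm-research/open_llama_7b weights from the Hugging Face Hub and uses the slow LlamaTokenizer, since the auto-converted fast tokenizer was reported to mis-tokenize early OpenLLaMA releases.

```python
# Minimal sanity check (a sketch, not part of the original issue): load the
# unquantized OpenLLaMA weights and generate a short completion. If this
# output already looks wrong, the problem is upstream of GPTQ quantization.
# Assumption: the openlm-research/open_llama_7b checkpoint id.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_path = "openlm-research/open_llama_7b"  # assumed checkpoint id

# Use the slow LlamaTokenizer class directly rather than a fast tokenizer.
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Q: What is the largest animal?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If the unquantized model generates reasonable text, the next thing to check would be the quantization and inference commands themselves, which depend on the exact GPTQ-for-LLaMa revision being used.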