qwopqwop200 / GPTQ-for-LLaMa

4-bit quantization of LLaMA using GPTQ


Why does the model quantization print "Killed" at the end?

g558800 opened this issue · comments

Why does the model quantization print "Killed" at the end?

[screenshot attached]

It's a RAM issue: the process is being terminated by the Linux out-of-memory (OOM) killer, which prints "Killed". Try increasing the RAM size or adding swap space.
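As a rough sanity check before launching quantization (not part of this repo; the 30 GiB threshold below is only an assumed ballpark for a large LLaMA checkpoint), you can verify how much RAM plus swap is actually free:

```python
# Minimal sketch: check free RAM + swap before starting quantization.
# The 30 GiB threshold is an assumption; adjust it to the model you quantize.
import psutil

REQUIRED_GIB = 30  # assumed headroom for quantizing a large LLaMA checkpoint

def check_memory(required_gib: float = REQUIRED_GIB) -> None:
    vm = psutil.virtual_memory()
    sm = psutil.swap_memory()
    available_gib = (vm.available + sm.free) / 2**30
    print(f"RAM available: {vm.available / 2**30:.1f} GiB, "
          f"swap free: {sm.free / 2**30:.1f} GiB")
    if available_gib < required_gib:
        print(f"Warning: less than {required_gib} GiB free; "
              "the quantization process may be OOM-killed.")

if __name__ == "__main__":
    check_memory()
```

If physical RAM can't be increased, the usual workaround is to add a swap file (e.g. with `fallocate`, `mkswap`, and `swapon` on Linux) so the OOM killer no longer terminates the process.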

Resolved. Thanks!