qwopqwop200 / GPTQ-for-LLaMa

4-bit quantization of LLaMA using GPTQ

How to quantize bloom after lora/ptuning?

moonlightian opened this issue

I fine-tuned BLOOM with LoRA and would like to quantize the model with GPTQ. This is how I load the base model and the adapter:
```python
self.model = AutoModelForCausalLM.from_pretrained(
    self.config['checkpoint_path'],
    device_map='auto',
)
# load adapter
self.model = PeftModelForCausalLM.from_pretrained(
    self.model, '/tmp/bloom_ori/lora_bloom'
)
```
Some errors happened, like the one in the screenshot below:

[traceback screenshot]
It seems that after loading the adapter there is a dimension mismatch between alibi and attention_mask. How can I get rid of these errors and quantize the model together with the adapter?
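For reference, here is a minimal sketch of what I was considering as a workaround: merging the LoRA weights into the base model with PEFT before quantizing, so GPTQ only sees an ordinary BloomForCausalLM. The checkpoint name and the `merge_and_unload()` call are my assumptions, not something I have verified against this repo.

```python
# Sketch (assumption): merge the LoRA adapter into the base BLOOM weights
# first, so the quantizer only sees a plain transformers model.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    'bigscience/bloom-7b1',            # placeholder for self.config['checkpoint_path']
    device_map='auto',
)
peft_model = PeftModel.from_pretrained(base, '/tmp/bloom_ori/lora_bloom')
merged = peft_model.merge_and_unload()  # fold the LoRA deltas into the base weights
# `merged` should now be an ordinary BloomForCausalLM that GPTQ can quantize.
```

If merging first is not the intended workflow here, please let me know what the right approach is.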