artidoro / qlora

QLoRA: Efficient Finetuning of Quantized LLMs

Home Page: https://arxiv.org/abs/2305.14314

Should base model be dequantized when merging LoRA weights with base model?

jinyongyoo opened this issue

Hi, I have a question about merging LoRA weights with a quantized base model. When we want to merge the LoRA weights back into the original model for inference, we can use the merge_and_unload method. However, this obviously isn't possible when the base model is quantized (as seen in #28).

So a common workaround I've seen is to load the base model without quantization and then merge the LoRA weights into it. But shouldn't this create a training/inference mismatch, since the LoRA weights were trained against the quantized model, which differs from the unquantized base model (quantization is inherently a noisy process)? I was wondering whether such a workaround degrades the performance of the final model.
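For concreteness, that workaround typically looks something like the sketch below, using transformers and peft (the model id and adapter path are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model in half precision, *without* 4-bit quantization.
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",  # placeholder: the model the adapter was trained on
    torch_dtype=torch.float16,
)

# Attach the QLoRA-trained adapter and fold the LoRA deltas into the base
# weights, producing a plain, adapter-free model.
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # placeholder path
model = model.merge_and_unload()
```

Note that the merge here happens against the unquantized weights, which is exactly the mismatch being asked about: training saw dequantize(quantize(W)) plus the LoRA deltas, while this merged model serves W plus the LoRA deltas.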

Another workaround I can think of is to dequantize the quantized base model and then add the LoRA weights. That would get rid of the training/inference mismatch. Has there been any attempt to dequantize the base model and merge the LoRA weights into it?

There is a mathematical hack to dequantize the base model. See my post here: #28 (comment)

Thanks! I ended up using the dequantize_4bit method from bitsandbytes to dequantize the linear weights, but I think this approach is neat.

@jinyongyoo Would you mind sharing the code that you used to dequantize the model? How did you apply dequantize_4bit?

Not sure if this is the 100% correct way to do it.

dequantize_4bit(module.weight.data, quant_state=module.weight.quant_state), where module is an instance of bnb.nn.Linear4bit. That should get you the weight, which you can then use to create a torch.nn.Linear.
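A minimal sketch of that, assuming a recent bitsandbytes where dequantize_4bit is exposed in bitsandbytes.functional and the quantization state is stored on the weight (as described above):

```python
import torch
import bitsandbytes as bnb
from bitsandbytes.functional import dequantize_4bit

def dequantize_linear4bit(module: bnb.nn.Linear4bit) -> torch.nn.Linear:
    """Rebuild a plain nn.Linear from a 4-bit quantized bnb layer."""
    # Recover the (approximate) half-precision weight matrix from the
    # packed 4-bit data and its quantization state.
    weight = dequantize_4bit(module.weight.data, quant_state=module.weight.quant_state)
    linear = torch.nn.Linear(
        module.in_features,
        module.out_features,
        bias=module.bias is not None,
        device=weight.device,
        dtype=weight.dtype,
    )
    linear.weight = torch.nn.Parameter(weight)
    if module.bias is not None:
        linear.bias = torch.nn.Parameter(module.bias.data.to(weight.dtype))
    return linear
```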

@jinyongyoo Awesome, thank you! And you just looped through every module of the model, checked whether it was of type bnb.nn.Linear4bit, and, if it was, replaced that module with the dequantized version?

yes
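Putting it together, that traversal might look like the sketch below (the adapter path is a placeholder, and dequantize_linear4bit is the hypothetical helper sketched earlier in the thread):

```python
import bitsandbytes as bnb
from peft import PeftModel

# Swap every 4-bit linear layer for its dequantized half-precision equivalent.
for name, module in list(model.named_modules()):
    if isinstance(module, bnb.nn.Linear4bit):
        # Find the parent module so the child can be replaced in place.
        parent_name, _, child_name = name.rpartition(".")
        parent = model.get_submodule(parent_name) if parent_name else model
        setattr(parent, child_name, dequantize_linear4bit(module))

# With every layer back in half precision, the usual merge works again, and
# the dequantized weights match what the LoRA adapter saw during training.
model = PeftModel.from_pretrained(model, "path/to/lora-adapter")  # placeholder
model = model.merge_and_unload()
```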