artidoro / qlora

QLoRA: Efficient Finetuning of Quantized LLMs

Home Page: https://arxiv.org/abs/2305.14314

Merge checkpoint adapter weights with model

sidracha opened this issue · comments

My fine-tuning run on a 4-bit quantized model was interrupted. How can I merge the adapter weights from a checkpoint into the base model for inference? I couldn't find a way to do it anywhere else.
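One possible approach, sketched below under assumptions: the checkpoint directory is a PEFT-style adapter checkpoint (containing `adapter_config.json` and the adapter weights), and the model IDs/paths are placeholders for your own. Since LoRA deltas cannot be folded directly into 4-bit quantized weights, the base model is reloaded in fp16 before merging.

```python
def merge_adapter(base_model_id: str, checkpoint_dir: str, out_dir: str):
    """Load the base model in fp16, apply the saved LoRA adapter from
    the checkpoint, fold it into the base weights, and save the merged
    model for plain (non-PEFT) inference."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    # Reload the base model in half precision, not 4-bit: merging
    # requires dequantized weights to add the LoRA deltas into.
    base = AutoModelForCausalLM.from_pretrained(
        base_model_id, torch_dtype=torch.float16, device_map="auto"
    )

    # Attach the adapter weights saved in the interrupted run's checkpoint.
    model = PeftModel.from_pretrained(base, checkpoint_dir)

    # Fold the LoRA matrices into the base weights and drop the PEFT wrappers.
    merged = model.merge_and_unload()

    # Save merged weights plus tokenizer so out_dir is self-contained.
    merged.save_pretrained(out_dir)
    AutoTokenizer.from_pretrained(base_model_id).save_pretrained(out_dir)
```

Usage would be something like `merge_adapter("huggyllama/llama-7b", "output/checkpoint-1000", "output/merged")` with your own base model and checkpoint path; the merged directory can then be loaded with `AutoModelForCausalLM.from_pretrained` alone, no PEFT needed at inference time.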