Lightning-AI / lit-llama

Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed.

Combine adapter weights with the base model

wxl-lxw opened this issue · comments

Hello! Great work! Since I want to evaluate the entire fine-tuned model, I am wondering how to combine the adapter weights with the base model and save the result as a new model checkpoint.
Thank you very much.
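
One possible approach, sketched below, is to load both state dicts and save their union, under the assumption that the adapter checkpoint contains only the fine-tuned adapter parameters (as produced by adapter fine-tuning in this repo). The file paths are hypothetical placeholders; adjust them to your local layout.

```python
# Minimal sketch: merge a base checkpoint with an adapter checkpoint
# into a single state dict. Assumes the adapter checkpoint holds only
# the adapter parameters; paths below are hypothetical.
import torch

base_path = "checkpoints/lit-llama/7B/lit-llama.pth"      # hypothetical path
adapter_path = "out/adapter/lit-llama-adapter.pth"        # hypothetical path

base_state = torch.load(base_path, map_location="cpu")
adapter_state = torch.load(adapter_path, map_location="cpu")

# Extend the base state dict with the fine-tuned adapter entries;
# where keys overlap, the adapter's values take precedence.
merged = {**base_state, **adapter_state}

torch.save(merged, "lit-llama-adapter-merged.pth")
```

Note the difference between the two fine-tuning methods: LoRA's low-rank matrices can in principle be folded into the original linear weights (W' = W + (alpha/r) * B @ A), so a merged LoRA checkpoint can match the stock model architecture. LLaMA-Adapter, by contrast, adds new parameters (adapter prompts and gating factors), so a checkpoint merged as above must still be loaded into the adapter variant of the model class, not the plain base model.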