QLoRA: Efficient Finetuning of Quantized LLMs
Home Page: https://arxiv.org/abs/2305.14314
dekoponTree opened this issue 5 months ago · comments
Llama-7B finetuned with QLoRA on Alpaca has different results in Table 4 and Table 5:
the former reports 39.0, while the latter reports 38.8.