Lightning-AI / lit-llama

Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed.

Converting Adapter to Hugging Face format

LamOne1 opened this issue · comments

LamOne1 commented:

Based on the discussion here: #435 (comment), the current code can only convert the base model into Hugging Face format. Converting an adapter requires different code; I'd like to request your support for that.
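
For reference, here is a minimal sketch of one possible approach: overlay the fine-tuned parameters from the adapter checkpoint onto the base state dict, then run the existing base-model conversion on the merged file. The paths and checkpoint layout below are assumptions for illustration, not lit-llama's actual API. Note that adapter-only parameters (e.g. adaption prompts and gates) have no counterpart in the stock Hugging Face LLaMA architecture, so they cannot be carried over without custom modeling code on the HF side.

```python
# Hypothetical sketch: merge an adapter checkpoint back into the base
# state dict so the existing base-model conversion can run on the result.
# All paths and key names below are assumptions, not lit-llama's API.
import torch

base_path = "checkpoints/lit-llama/7B/lit-llama.pth"       # assumed path
adapter_path = "out/adapter/alpaca/lit-llama-adapter.pth"  # assumed path
merged_path = "out/adapter/alpaca/lit-llama-merged.pth"

base_sd = torch.load(base_path, map_location="cpu")
adapter_sd = torch.load(adapter_path, map_location="cpu")

# Overlay fine-tuned parameters onto the base weights. Keys that exist
# only in the adapter checkpoint (e.g. adaption prompts and gates) have
# no equivalent in the stock HF LLaMA architecture, so they are
# collected separately rather than merged.
merged, adapter_only = dict(base_sd), {}
for key, tensor in adapter_sd.items():
    if key in merged:
        merged[key] = tensor
    else:
        adapter_only[key] = tensor

torch.save(merged, merged_path)
print(f"overwrote {len(adapter_sd) - len(adapter_only)} base keys; "
      f"{len(adapter_only)} adapter-only keys were not merged")
```

The merged checkpoint could then be passed to the existing base-model conversion script in place of the original weights, with the caveat above that any behavior carried by the adapter-only parameters would be lost.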