Lightning-AI / lit-llama

Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.

Inference of a fine-tuned model using LoRA in Hugging Face format

LamOne1 opened this issue · comments

commented

Hello,
I used this script to merge the LoRA weights into the base model. Then, I used this script to convert my model to the Hugging Face format.
But when I run inference on the model in Hugging Face, it never outputs the end token; it behaves like a pretrained model rather than a fine-tuned one.
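(For context: merging folds the low-rank LoRA update into the base weights, so the merged checkpoint no longer needs the adapter modules. A minimal sketch of the idea, with illustrative names rather than the script's actual code:)

import torch

# Conceptual sketch of a LoRA merge for a single linear layer:
# the low-rank update B @ A, scaled by alpha / r, is folded into W.
def merge_lora_layer(W: torch.Tensor, A: torch.Tensor, B: torch.Tensor,
                     alpha: float, r: int) -> torch.Tensor:
    scaling = alpha / r
    return W + scaling * (B @ A)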
Here is my inference pipeline:

# generation_pipeline: a transformers text-generation pipeline over the converted checkpoint
response = generation_pipeline(prompt,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=False,
        num_beams=4,
        max_length=500,
        top_p=0.1,
        top_k=20,
        repetition_penalty=3.0,
        no_repeat_ngram_size=3)[0]['generated_text']
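(The setup for generation_pipeline and tokenizer is not shown in the issue; it is assumed to be roughly the following, with a placeholder checkpoint path:)

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Placeholder path to the converted Hugging Face checkpoint
model_path = "path/to/converted-hf-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

# Text-generation pipeline used in the snippet above
generation_pipeline = pipeline("text-generation", model=model, tokenizer=tokenizer)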

I'm not sure whether this inference pipeline matches the one in this repository.
The reason I want to run inference on my model there is that I'm facing an issue in the generate script and I want to use beam search.

I appreciate your help.

commented

update:
convert_lora_weights is working as expected: I tested the converted model using generate.py and it generated the eos token. So the problem is due either to the conversion to the Hugging Face format or to the inference pipeline.

Are you using the 7B parameter model? That is the one I tested my conversion script on.

commented

Yes, I used 7B. How did you create the inference pipeline? Let me test it with my model.

I added another commit to my PR which should help streamline the conversion process.

I used the following generation config:

from transformers import GenerationConfig

generation_config = GenerationConfig(
    temperature=1,
    typical_p=1,
    max_new_tokens=512,
    num_beams=1,
    do_sample=True,
)

I would recommend trying to sample with minimal/default parameters first, though, before running a more intricate sampling algorithm like beam search or typical sampling.
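(For instance, something along these lines stays close to the defaults; model, tokenizer, and prompt are assumed to be set up as above:)

from transformers import GenerationConfig

# Near-default decoding: greedy, no beams, no penalties
generation_config = GenerationConfig(max_new_tokens=256, do_sample=False)

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(output_ids[0], skip_special_tokens=False))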

update: convert_lora_weights is working as expected: I tested the converted model using generate.py and it generated the eos token. So the problem is due either to the conversion to the Hugging Face format or to the inference pipeline.

If it generates the token when you call generate, this is likely an issue with the weights that your fine-tuning process has produced. But it may help to have the model in a Hugging Face format so you can experiment with different sampling approaches, look at some of the lower-likelihood logits at the position where the token gets generated to see whether they make sense, etc.
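(A rough sketch of how those logits could be inspected at the last position of the prompt, assuming model and tokenizer are already loaded as above:)

import torch

# Look at the top candidates at the last position of the prompt,
# e.g. to see how much probability mass the eos token receives.
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, seq_len, vocab_size)
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=20)
for p, tok_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tok_id:>6}  {p:.4f}  {tokenizer.decode([tok_id])!r}")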

commented

Thank you @wjurayj, I really appreciate your help.
Unfortunately, the model still acts as a pretrained one even after using your inference pipeline and the updated code.
The model doesn't even recognize the context it was fine-tuned on:

"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
f"### Instruction:\n{example['instruction']}\n\n### Response:"

Maybe I should mention that I don't use the LLaMA tokenizer; I used my own tokenizer, which has a 64K vocab size, so I changed the generated config file. I also changed the IDs for the pad and eos tokens: my eos token ID is 0 and my pad token ID is 2, while the generated config shows them as 2 and 0 respectively.
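(If the converted config and the custom tokenizer disagree on the special-token IDs, one option, assuming a reasonably recent transformers version, is to set them explicitly on the loaded model instead of relying on the config file; the values below match the ones described above:)

# Explicitly set the special-token IDs for the custom 64K tokenizer
model.config.eos_token_id = 0
model.config.pad_token_id = 2
model.generation_config.eos_token_id = 0
model.generation_config.pad_token_id = 2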

commented

I fixed the issue! The problem was caused by the context. :) The context/instruction I provided at inference time was not exactly the one I used during training (there was a difference in the number of spaces!).
Thank you so much @wjurayj! Thank you for your time and effort!
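(One way to catch this kind of mismatch is to compare the token IDs of the training prompt and the inference prompt directly; the variable names here are illustrative:)

# A whitespace difference shows up immediately in the token IDs
train_ids = tokenizer(training_prompt)["input_ids"]
infer_ids = tokenizer(inference_prompt)["input_ids"]
print(train_ids == infer_ids)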