SafeAILab / EAGLE

Official Implementation of EAGLE-1 and EAGLE-2

Home Page: https://arxiv.org/pdf/2406.16858


runtime

qspang opened this issue · comments

[screenshot of the runtime error]
How can I solve this problem? I cloned the project completely, transferred it to my machine, and then ran this evaluation directly without changing the code. The models were downloaded from Hugging Face. The only changes I made were targeted fixes for the runtime errors (without those changes, the code would not run at all and the error would still be reported).

This error indicates that a NaN occurred during the initial forward pass (prefill) of the base model, which seems unrelated to EAGLE. Could you provide more details about your experimental setup, such as the command you ran?
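A NaN that appears during the prefill can be localized with PyTorch forward hooks before blaming EAGLE. The sketch below is a minimal illustration, not part of the EAGLE codebase: `find_first_nan` is a hypothetical helper, and a toy `nn.Sequential` stands in for the 7B base model (the same helper works on any `nn.Module`, including a loaded Llama).

```python
import torch
import torch.nn as nn

def find_first_nan(model: nn.Module, inputs: torch.Tensor):
    """Run one forward pass and return the name of the first submodule
    whose output contains a NaN, or None if the pass is clean."""
    nan_sites = []
    hooks = []

    def make_hook(name):
        def hook(module, inp, out):
            t = out[0] if isinstance(out, tuple) else out
            if torch.is_tensor(t) and torch.isnan(t).any():
                nan_sites.append(name)
        return hook

    # Hook every named submodule (skip the root module, whose name is "").
    for name, module in model.named_modules():
        if name:
            hooks.append(module.register_forward_hook(make_hook(name)))
    try:
        with torch.no_grad():
            model(inputs)
    finally:
        for h in hooks:
            h.remove()
    return nan_sites[0] if nan_sites else None

# Demo: corrupt the second layer's weights so its output becomes NaN.
model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 4))
with torch.no_grad():
    model[1].weight.fill_(float("nan"))
print(find_first_nan(model, torch.randn(2, 4)))  # -> 1
```

If the first NaN site is inside the base model's own layers (e.g. an attention or MLP block), the checkpoint or dtype is the problem rather than the EAGLE draft model.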

[screenshot of the command] The above is my running command. llama2-7b-chat was downloaded under the official Meta license, the draft weights were downloaded from the project's link (yuhuili/EAGLE-llama2-chat-7B), and I installed the dependencies as the project requires with `pip install -r requirements.txt`. The GPU I use is an NVIDIA 3090. I did not run the project's training step; I wanted to test the acceleration directly with the weights you provide.

When I used llama2-7b-chat-hf instead of llama2-7b-chat, I was surprised to find that it ran successfully, but vicuna-7b-v1.3 still failed. I suspect the vicuna-7b-v1.3 checkpoint is not in the Hugging Face format. Since the project uses the transformers library, the model weights must be compatible with transformers, so I guess vicuna-7b-v1.3 needs to be converted into the Hugging Face format, like llama2-7b-chat-hf.

Sure, in order to use Hugging Face transformers, you must use the -hf weights.

But the vicuna-7b-v1.3 I used was downloaded from Hugging Face! Link: https://huggingface.co/lmsys/vicuna-7b-v1.3

Are you still encountering the same error when using vicuna-7b-v1.3? Can you generate normally using Hugging Face's `generate` function?
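Such a sanity check can be sketched as below. This is an illustrative, hypothetical helper (not part of EAGLE), assuming a CUDA GPU with enough memory for the fp16 checkpoint; if plain `generate` already fails or produces NaNs, the problem is in the base checkpoint or environment rather than in EAGLE.

```python
def hf_generate_sanity_check(model_path: str,
                             prompt: str = "Hello, my name is") -> str:
    """Load a checkpoint with plain transformers and greedily generate a
    few tokens, bypassing EAGLE entirely. Returns the decoded text."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModelForCausalLM.from_pretrained(
        model_path, torch_dtype=torch.float16
    ).cuda().eval()

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Usage would be e.g. `hf_generate_sanity_check("lmsys/vicuna-7b-v1.3")`; readable continuation text indicates the base checkpoint itself is fine.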


I still can't run your EAGLE project normally using vicuna-7b-v1.3. I haven't tried Hugging Face's `generate` function yet.

Can you now use vicuna-7b-v1.3 to run your EAGLE project normally?

I can run it normally.