How to eval using HF?
QiaoZhennn opened this issue
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSeq2SeqLM

processor = AutoProcessor.from_pretrained("ermu2001/pllava-34b")
model = AutoModelForSeq2SeqLM.from_pretrained("ermu2001/pllava-34b")
```
I tried to load the model using this demo code, but it throws the following error. Is there an example of how to run inference using Hugging Face?

```
Unrecognized configuration class <class 'transformers.models.llava.configuration_llava.LlavaConfig'> for this kind of AutoModel: AutoModelForSeq2SeqLM. Model type should be one of BartConfig, BigBirdPegasusConfig, BlenderbotConfig, BlenderbotSmallConfig, EncoderDecoderConfig, FSMTConfig, GPTSanJapaneseConfig, LEDConfig, LongT5Config, M2M100Config, MarianConfig, MBartConfig, MT5Config, MvpConfig, NllbMoeConfig, PegasusConfig, PegasusXConfig, PLBartConfig, ProphetNetConfig, SeamlessM4TConfig, SeamlessM4Tv2Config, SwitchTransformersConfig, T5Config, UMT5Config, XLMProphetNetConfig.
```
My transformers version is 4.39.2.
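For context, the error itself can be reproduced without downloading any weights: `AutoModelForSeq2SeqLM` dispatches on the config's `model_type` through a fixed mapping, and `llava` is not in it. A small check (note this imports a private transformers internal whose name may change between versions; checked against transformers ~4.39):

```python
# Inspect the model_type -> class mapping that AutoModelForSeq2SeqLM
# dispatches on. "llava" is absent, which is exactly why loading raises
# "Unrecognized configuration class ... LlavaConfig".
from transformers.models.auto.modeling_auto import (
    MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES,
)

print("llava" in MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES)  # False
print("t5" in MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES)     # True (T5Config is in the error's list)
```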
Hi,
The model weights we've uploaded are formatted with transformers PEFT LoRA, so they don't support direct loading with this transformers auto-loading code yet. To load our model, check out this function in our code for reference. Using it, you should be able to load the model with PeftLanguageModel.
PLLaVA/tasks/eval/model_utils.py
Lines 39 to 125 in e2032aa
By the way, if you wish to run the demo, you can execute this script.
If you want to evaluate our model directly, you can follow the instructions here to prepare the data, then execute this script.