kjerk / instructblip-pipeline

A multimodal inference pipeline that integrates InstructBLIP with textgen-webui for Vicuna and related models.

Error: instructblip isn't supported yet

rpeinl opened this issue

I'm using oobabooga on Windows 11 with an Nvidia 1080 8GB. I've activated the multimodal extension and installed your pipeline as suggested in the readme.
Then I load this model: https://huggingface.co/Yhyu13/instructblip-vicuna-7b-gptq-4bit
It is the only InstructBLIP GPTQ model I can find on Hugging Face.
After clicking the load button I get the following error message.

2023-09-22 20:19:46 ERROR:Failed to load the model.
Traceback (most recent call last):
File "C:\apps\oobabooga_windows\text-generation-webui\modules\ui_model_menu.py", line 194, in load_model_wrapper
shared.model, shared.tokenizer = load_model(shared.model_name, loader)
File "C:\apps\oobabooga_windows\text-generation-webui\modules\models.py", line 76, in load_model
output = load_func_map[loader](model_name)
File "C:\apps\oobabooga_windows\text-generation-webui\modules\models.py", line 302, in AutoGPTQ_loader
return modules.AutoGPTQ_loader.load_quantized(model_name)
File "C:\apps\oobabooga_windows\text-generation-webui\modules\AutoGPTQ_loader.py", line 57, in load_quantized
model = AutoGPTQForCausalLM.from_quantized(path_to_model, **params)
File "C:\apps\oobabooga_windows\installer_files\env\lib\site-packages\auto_gptq\modeling\auto.py", line 87, in from_quantized
model_type = check_and_get_model_type(model_name_or_path, trust_remote_code)
File "C:\apps\oobabooga_windows\installer_files\env\lib\site-packages\auto_gptq\modeling_utils.py", line 149, in check_and_get_model_type
raise TypeError(f"{config.model_type} isn't supported yet.")
TypeError: instructblip isn't supported yet.
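
For context, the failure happens before any weights are loaded: AutoGPTQ reads `model_type` from the checkpoint's config.json and looks it up in its table of supported causal-LM architectures. Below is a minimal sketch (not kjerk's or AutoGPTQ's actual code) showing what that config reports, assuming the repo ships a standard transformers `InstructBlipConfig`; the model ID is the one linked above.

```python
from transformers import AutoConfig

MODEL_ID = "Yhyu13/instructblip-vicuna-7b-gptq-4bit"

# AutoGPTQ's check_and_get_model_type() effectively does this lookup:
config = AutoConfig.from_pretrained(MODEL_ID)
print(config.model_type)  # "instructblip" -> not a supported causal-LM type, hence the TypeError

# Assuming a standard InstructBlipConfig, the Vicuna language model is nested
# inside it; its model_type ("llama") is the kind of architecture AutoGPTQ
# does recognize.
print(config.text_config.model_type)
```

In other words, the GPTQ quantization only concerns the Vicuna language model, but because the repo's top-level config declares the full InstructBLIP architecture, AutoGPTQ's model-type check rejects the whole checkpoint; the vision and Q-Former parts would have to be handled by the multimodal pipeline rather than by the GPTQ loader.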