oobabooga / text-generation-webui-extensions

LLaVA models not loading

teaxio opened this issue · comments

First thank you for this wonderful solution. It puts a smile on my face every time I use it :)
Env: Mac M2

I am trying to use LLaVA following the directions here: https://github.com/oobabooga/text-generation-webui/blob/main/extensions/multimodal/README.md
Running python server.py doesn't work for me, because the conda env doesn't seem to get loaded properly, even if I activate the conda env that start_mac.sh created. So I typically just run start_mac.sh.

So I tried start_mac.sh --model wojtab_llava-7b-v0-4bit-128g --multimodal-pipeline llava-7b and start_mac.sh --model llama-7b-4bit --multimodal-pipeline minigpt4-7b (I also tried adding --wbits 4 --groupsize 128), and I get this error:
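For context, a 4-bit GPTQ checkpoint like this one generally needs a GPTQ-capable loader selected explicitly rather than the default Transformers loader. A sketch of such an invocation follows; the --loader flag value and its applicability on an M2 Mac are assumptions, not verified against this setup:

```shell
# Hypothetical launch line: explicitly select a GPTQ loader for the
# 4-bit quantized checkpoint (loader name assumed, unverified on macOS)
./start_mac.sh --model wojtab_llava-7b-v0-4bit-128g \
    --multimodal-pipeline llava-7b \
    --loader autogptq --wbits 4 --groupsize 128
```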

2023-10-16 10:26:32 INFO:Loading wojtab_llava-7b-v0-4bit-128g...
Traceback (most recent call last):
  File "/Users/me/textgen/server.py", line 223, in <module>
    shared.model, shared.tokenizer = load_model(model_name)
  File "/Users/me/textgen/modules/models.py", line 79, in load_model
    output = load_func_map[loader](model_name)
  File "/Users/me/textgen/modules/models.py", line 136, in huggingface_loader
    model = LoaderClass.from_pretrained(path_to_model, **params)
  File "/Users/me/textgen/installer_files/env/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 565, in from_pretrained
    return model_class.from_pretrained(
  File "/Users/me/textgen/installer_files/env/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2864, in from_pretrained
    raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory models/wojtab_llava-7b-v0-4bit-128g.

The models directory for that model does not contain any of the files it is looking for, and neither does the repo on HF: https://huggingface.co/wojtab/llava-7b-v0-4bit-128g/tree/main
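That matches the traceback: the plain Transformers loader only searches for a fixed set of weight filenames (the four listed in the OSError), while GPTQ checkpoints like this one ship a single quantized .safetensors file instead. A simplified sketch of that mismatch (not the actual Transformers code, and the helper name is made up for illustration):

```python
import os

# Filenames the Transformers loader searches for, per the error message.
# A GPTQ checkpoint such as llava-7b-v0-4bit-128g ships a quantized
# .safetensors file instead, so none of these names will match.
EXPECTED_WEIGHTS = [
    "pytorch_model.bin",
    "tf_model.h5",
    "model.ckpt.index",
    "flax_model.msgpack",
]

def find_loadable_weights(model_dir):
    """Return (expected, quantized): expected weight files present in
    model_dir, and any .safetensors files a GPTQ loader would pick up."""
    files = sorted(os.listdir(model_dir))
    expected = [f for f in files if f in EXPECTED_WEIGHTS]
    quantized = [f for f in files if f.endswith(".safetensors")]
    return expected, quantized
```

For a directory containing only the quantized file, expected comes back empty, which is exactly the condition that makes from_pretrained raise the OSError above.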

I also tried loading the web UI without specifying a model on the command line, then going to the Model tab and trying different model loaders for wojtab_llava-7b-v0-4bit-128g. However, I was met with errors similar to the one pasted above for each compatible loader.
I ran update_macos.sh right before submitting this issue, and that did not help.

Can you please provide some guidance on how to use LLaVA in this tool? Thank you.

This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.