containers / podman-desktop-extension-ai-lab

Work with LLMs on a local environment using containers

Home Page: https://podman-desktop.io/extensions/ai-lab


Playground does not work for whisper

rrbanda opened this issue

[Screenshot 2024-04-28 at 11:30:55 AM]

You can see at the top right that it says "Model Service not running".

I have a feeling the Whisper GGUF model is not working anymore.

If I go to the pod running the AI Lab model service, I get this error:

gguf_init_from_file: invalid magic characters 'lmgg'
llama_model_load: error loading model: llama_model_loader: failed to load model from /models/ggml-small.bin

llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib64/python3.9/site-packages/llama_cpp/server/__main__.py", line 88, in <module>
    main()
  File "/usr/local/lib64/python3.9/site-packages/llama_cpp/server/__main__.py", line 74, in main
    app = create_app(
  File "/usr/local/lib64/python3.9/site-packages/llama_cpp/server/app.py", line 138, in create_app
    set_llama_proxy(model_settings=model_settings)
  File "/usr/local/lib64/python3.9/site-packages/llama_cpp/server/app.py", line 75, in set_llama_proxy
    _llama_proxy = LlamaProxy(models=model_settings)
  File "/usr/local/lib64/python3.9/site-packages/llama_cpp/server/model.py", line 31, in __init__
    self._current_model = self.load_llama_from_model_settings(
  File "/usr/local/lib64/python3.9/site-packages/llama_cpp/server/model.py", line 138, in load_llama_from_model_settings
    _model = create_fn(
  File "/usr/local/lib64/python3.9/site-packages/llama_cpp/llama.py", line 314, in __init__
    self._model = _LlamaModel(
  File "/usr/local/lib64/python3.9/site-packages/llama_cpp/_internals.py", line 55, in __init__
    raise ValueError(f"Failed to load model from file: {path_model}")
ValueError: Failed to load model from file: /models/ggml-small.bin
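
The same traceback repeats each time the model service restarts. The key line is the "invalid magic characters 'lmgg'" message: it suggests that /models/ggml-small.bin is not a GGUF file at all. 'lmgg' is the little-endian byte order of the legacy 'ggml' magic used by whisper.cpp checkpoints, while llama-cpp-python's server only loads GGUF models. As a quick sanity check outside AI Lab, you can inspect the first four bytes of the downloaded file; this is only a sketch, and the path is an assumption that should point at your local copy of the model:

```python
# Minimal sketch (not part of AI Lab): inspect the model file's magic bytes.
# MODEL_PATH is an assumption; point it at your local copy of ggml-small.bin.
MODEL_PATH = "/models/ggml-small.bin"

with open(MODEL_PATH, "rb") as f:
    magic = f.read(4)

print("magic bytes:", magic)
if magic == b"GGUF":
    # GGUF container: this is what llama-cpp-python's server expects.
    print("GGUF model: llama-cpp-python should be able to load it.")
elif magic == b"lmgg":
    # 'lmgg' is 'ggml' stored little-endian: a legacy whisper.cpp/GGML checkpoint.
    print("Legacy GGML (whisper.cpp) model: llama.cpp cannot load it.")
else:
    print("Unrecognized format.")
```

If the bytes come back as b'lmgg', the file is a legacy GGML checkpoint (the format whisper.cpp uses for ggml-small.bin), and as far as I know the playground's llama.cpp-based backend cannot serve it regardless of how the service is configured; whisper models would need a whisper.cpp-based runtime instead.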