Mozilla-Ocho / llamafile

Distribute and run LLMs with a single file.

Home Page: https://llamafile.ai

run-detectors: unable to find an interpreter for ./Meta-Llama-3-8B-Instruct.Q6_K.llamafile

superkuh opened this issue

Edit: never mind, this is covered in https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#gotchas
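
For anyone who lands here later: the fix documented in that Gotchas section installs the actual APE interpreter and registers it with binfmt_misc, so the kernel stops handing llamafiles to run-detectors. From memory it is roughly the following; check the linked README for the current commands:

# install the APE loader and register it for both APE magic signatures
sudo wget -O /usr/bin/ape https://cosmo.zip/pub/cosmos/bin/ape-$(uname -m).elf
sudo chmod +x /usr/bin/ape
sudo sh -c "echo ':APE:M::MZqFpD::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"
sudo sh -c "echo ':APE-jart:M::jartsr::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"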

Hi. I downloaded Meta-Llama-3-8B-Instruct.Q6_K.llamafile, ran chmod +x on it, and attempted to run it, but I get an error.

superkuh@janus:~/app_installs/llama.cpp/models$ ./Meta-Llama-3-8B-Instruct.Q6_K.llamafile -ngl 9999
run-detectors: unable to find an interpreter for ./Meta-Llama-3-8B-Instruct.Q6_K.llamafile

superkuh@janus:~/app_installs/llama.cpp/models$ ./Meta-Llama-3-8B-Instruct.Q6_K.llamafile --gpu disable
run-detectors: unable to find an interpreter for ./Meta-Llama-3-8B-Instruct.Q6_K.llamafile

superkuh@janus:~/app_installs/llama.cpp/models$ ./Meta-Llama-3-8B-Instruct.Q6_K.llamafile
run-detectors: unable to find an interpreter for ./Meta-Llama-3-8B-Instruct.Q6_K.llamafile

My computer is a bog-standard Ryzen 5 3600 running Debian 11. Since this fails in CPU mode too, it's probably not related to my GPU, an AMD RX 580 8GB. Does anyone know if there's a fix for this, or even what's going on?
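
As far as I can tell, run-detectors comes from Debian's binfmt-support package: llamafiles begin with a Windows-style MZ magic, so a binfmt_misc rule hands them to run-detectors, which then looks for something like Wine or Mono to run them and gives up. Listing the registered formats should confirm what grabbed the file (update-binfmts ships with binfmt-support; I'm assuming it's installed here):

update-binfmts --display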

As an aside, the plain non-APE, non-llamafile llama.cpp ./main and ./server binaries work fine.
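
One possible stopgap, on the assumption that an APE binary doubles as a shell script (that's what the MZqFpD magic is): launching it under sh should bypass binfmt_misc entirely. The README suggests a similar trick for zsh users; the flag here is just the one from my attempts above:

sh ./Meta-Llama-3-8B-Instruct.Q6_K.llamafile -ngl 9999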