Mozilla-Ocho / llamafile

Distribute and run LLMs with a single file.

Home Page: https://llamafile.ai


All Sorts of Issues Executing (WSL and Windows)

gjnave opened this issue

Hey guys,
So I'm having a difficult time getting certain files to load. Here's one example: the file below works on Windows if I change it to an .exe, but fails to work when I leave it as a llamafile for WSL.

cognibuild@DESKTOP-I6N5JH7:/mnt/e/OneClickLLMs$ chmod +x rocket-3b.Q5_K_M.llamafile.exe rocket-3b.Q5_K_M.llamafile
cognibuild@DESKTOP-I6N5JH7:/mnt/e/OneClickLLMs$ ./rocket-3b.Q5_K_M.llamafile.exe rocket-3b.Q5_K_M.llamafile -ngl 9999
-bash: ./rocket-3b.Q5_K_M.llamafile.exe: Invalid argument
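
For context: on WSL, an "Invalid argument" error like this usually means WSL's Windows-interop binfmt handler grabbed the MZ-prefixed llamafile and handed it to Windows, which rejects executables over 4GB. One workaround, sketched here under the assumption of a standard WSL2 distro with wget available (the cosmo.zip URL and magic string are the usual ones from the Cosmopolitan APE loader docs; verify against the current docs), is to install the APE loader and register it with binfmt_misc so Linux runs the file natively:

# Install the Cosmopolitan APE loader for this machine's architecture
sudo wget -O /usr/bin/ape https://cosmo.zip/pub/cosmos/bin/ape-$(uname -m).elf
sudo chmod +x /usr/bin/ape
# Tell the kernel to hand APE binaries (they start with "MZqFpD") to the loader
sudo sh -c "echo ':APE:M::MZqFpD::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"

After that, ./rocket-3b.Q5_K_M.llamafile should launch directly under WSL without the .exe rename.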

Then there's this, which I can't get to run on either Windows or WSL (with the extension properly changed):

Meta-Llama-3-8B-Instruct.Q5_K_M.llamafile.exe
"This App Cant Run on your PC" (Big blue screen)

Any advice is appreciated

Came here to report a very similar experience.

$ chmod +x Meta-Llama-3-70B-Instruct.Q4_0.llamafile
$ ./Meta-Llama-3-70B-Instruct.Q4_0.llamafile -ngl 9999
./Meta-Llama-3-70B-Instruct.Q4_0.llamafile: Invalid argument

I'm running exactly what the README says to run and it doesn't do the thing. But I had downloaded the original llamafile when it was first released and that version worked fine. What has changed between that release and this one?

Renaming it to end in .exe and running it directly on Windows instead, I get this:

[screenshot]

From the README:

Unfortunately, Windows users cannot make use of many of these example llamafiles because Windows has a maximum executable file size of 4GB, and all of these examples exceed that size. (The LLaVA llamafile works on Windows because it is 30MB shy of the size limit.) But don't lose heart: llamafile allows you to use external weights; this is described later in this document.

I want to know how to reduce the size to < 4GB
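
One possible route (a sketch; it assumes the llamafile was packed the usual way, so the embedded GGUF's exact name inside the archive may differ): a llamafile is an ordinary ZIP archive, so the weights can be extracted with unzip and passed to the small release binary, which itself stays under the 4GB limit:

# List the archive contents to find the embedded GGUF's real name
unzip -l Meta-Llama-3-8B-Instruct.Q5_K_M.llamafile
# Extract just the weights
unzip Meta-Llama-3-8B-Instruct.Q5_K_M.llamafile '*.gguf'
# On Windows, run the renamed (<4GB) release binary against the external weights
.\llamafile.exe -m Meta-Llama-3-8B-Instruct.Q5_K_M.gguf -ngl 9999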

This seems to work on Windows: rename the llamafile binary from the releases page to end in .exe, then run:

.\llamafile.exe -m "path\to\gguf\file.gguf" -ngl 9999

The README says to download the weights separately in order to run the llamafile on Windows.

On Windows it works great: just unzip the file and you can load the weights separately with a .bat file (see the sketch below).

As for WSL, the .sh file should run, but it doesn't.
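
A minimal sketch of such a launcher .bat (the paths and file names are hypothetical; it assumes llamafile.exe is the renamed release binary and the GGUF was already extracted from the big llamafile, as in the unzip sketch above):

@echo off
rem Launch the small llamafile release binary with external weights
.\llamafile.exe -m "E:\OneClickLLMs\rocket-3b.Q5_K_M.gguf" -ngl 9999
pause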

Downloading llamafile-0.8.1 from the releases page, then renaming it to have an .exe extension, and using that to run the model worked for me.

It would be nice if the project's README had similar instructions:

.\llamafile-0.8.1.exe -m "Meta-Llama-3-70B-Instruct.Q4_0.llamafile.exe" --server -ngl 9999

On an RTX 3090, I get 0.5 tokens per second.

Ran into the same issue.

./Meta-Llama-3-8B-Instruct.Q5_K_M.llamafile: Invalid argument

When I disabled the Win32 interop feature in /etc/wsl.conf as follows:

[interop]
enabled=false

I got the following message:

<3>WSL (2233) ERROR: UtilAcceptVsock:250: accept4 failed 110


Same here, have you found a fix?