nomic-ai / gpt4all

gpt4all: run open-source LLMs anywhere

Home Page: https://gpt4all.io

"availableGPUDevices: built without Kompute" error when installed via pip on macOS M2

simonw opened this issue

Bug Report

If I install gpt4all on an Apple Silicon M2 Mac using pip like this:

pip install gpt4all

I get this error when I call GPT4All.list_gpus():

>>> from gpt4all import GPT4All
>>> GPT4All.list_gpus()
availableGPUDevices: built without Kompute
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/private/tmp/gp/venv/lib/python3.10/site-packages/gpt4all/gpt4all.py", line 622, in list_gpus
    return LLModel.list_gpus()
  File "/private/tmp/gp/venv/lib/python3.10/site-packages/gpt4all/_pyllmodel.py", line 260, in list_gpus
    raise ValueError("Unable to retrieve available GPU devices")
ValueError: Unable to retrieve available GPU devices

Steps to Reproduce

  1. Create a fresh virtual environment on a Mac: python -m venv venv && source venv/bin/activate
  2. Install GPT4All: pip install gpt4all
  3. Run this in a Python shell: from gpt4all import GPT4All; GPT4All.list_gpus()
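
For convenience, the same reproduction as one shell session:

python -m venv venv && source venv/bin/activate
pip install gpt4all
python -c "from gpt4all import GPT4All; GPT4All.list_gpus()"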

Expected Behavior

A list of GPU devices of some sort, since I believe Kompute, if available, should work on Apple Silicon.
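
That is, something like the following, where the device name string is purely my guess (I haven't seen a working build enumerate devices on this machine):

>>> GPT4All.list_gpus()
['Apple M2']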

Your Environment

  • Bindings version (e.g. "Version" from pip show gpt4all): 2.6.0
  • Operating System: macOS 14.1.1 (23B81), Darwin Kernel Version 23.1.0, on an M2 MacBook Pro (16-inch, 2023)
  • Chat model used (if applicable): N/A

I may be misunderstanding Kompute - it looks like it might not support Apple Silicon at all?

In that case I'm confused by the docs here: https://docs.gpt4all.io/gpt4all_python.html#gpt4all.gpt4all.GPT4All.device which say:

device (str | None, default: None) --

The processing unit on which the GPT4All model will run. It can be set to:

  • "cpu": Model will run on the central processing unit.
  • "gpu": Use Metal on ARM64 macOS, otherwise the same as "kompute".
  • "kompute": Use the best GPU provided by the Kompute backend.
  • "cuda": Use the best GPU provided by the CUDA backend.
  • "amd", "nvidia": Use the best GPU provided by the Kompute backend from this vendor.
  • A specific device name from the list returned by GPT4All.list_gpus().

Default is Metal on ARM64 macOS, "cpu" otherwise.

But when I do this:

model = GPT4All("Phi-3-mini-4k-instruct.Q4_0.gguf", device='gpu')

I get this:

availableGPUDevices: built without Kompute
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/simon/.local/share/virtualenvs/llm-gpt4all-N30dYrxH/lib/python3.11/site-packages/gpt4all/gpt4all.py", line 208, in __init__
    self.model.init_gpu(device)
  File "/Users/simon/.local/share/virtualenvs/llm-gpt4all-N30dYrxH/lib/python3.11/site-packages/gpt4all/_pyllmodel.py", line 272, in init_gpu
    all_gpus = self.list_gpus()
               ^^^^^^^^^^^^^^^^
  File "/Users/simon/.local/share/virtualenvs/llm-gpt4all-N30dYrxH/lib/python3.11/site-packages/gpt4all/_pyllmodel.py", line 260, in list_gpus
    raise ValueError("Unable to retrieve available GPU devices")
ValueError: Unable to retrieve available GPU devices

Firstly, that documentation is for a version of the Python bindings that hasn't been released yet. As of the current release, the only way to use the GPU on Apple Silicon is to not pass the device argument. There is no way to force use of the CPU.
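
Concretely, with the released bindings the following is what uses Metal on Apple Silicon (a minimal sketch, using the model file from the report above):

from gpt4all import GPT4All

# Leave the device argument unset; on ARM64 macOS the released
# bindings default to the Metal backend.
model = GPT4All("Phi-3-mini-4k-instruct.Q4_0.gguf")
print(model.generate("Hello,", max_tokens=16))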

Secondly, the Metal backend is much more complete (and efficient) than a Vulkan-based backend on top of MoltenVK would be. So we've never built GPT4All with Kompute support on Apple Silicon.

Even on the latest main branch, GPT4All.list_gpus() is not implemented for the Metal backend. That said, I'm not aware of any device supported by the llama.cpp Metal backend that can have more than one GPU, so there would be little to list anyway.
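
Until that changes, a caller that wants to probe for GPUs without crashing can treat the ValueError as "no enumerable devices" (a workaround sketch, not an officially supported pattern):

from gpt4all import GPT4All

try:
    gpus = GPT4All.list_gpus()
except ValueError:
    # Raised when the bindings were built without Kompute,
    # e.g. the Apple Silicon wheels, which are Metal-only.
    gpus = []
print(gpus)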