nomic-ai / gpt4all

gpt4all: run open-source LLMs anywhere

Home Page: https://gpt4all.io


gpt4all-api

cesinsingapore opened this issue · comments

commented

Documentation

Is there any documentation for running the gpt4all API on a local GPU? I'm using a 4090 locally with CUDA 12.8.

https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-api

The docs only cover CPU usage, so I'm a bit confused about running it on a local GPU. I'm also unsure how to get the model_id and what I'm supposed to do with num_shards.

gpt4all-api has been removed, see #2314.
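For anyone landing here, local GPU inference is still available through the gpt4all Python bindings rather than the removed gpt4all-api service. A minimal sketch, assuming a recent gpt4all release; the model filename is illustrative and the exact device strings accepted may vary by version:

```python
# Minimal sketch of local GPU inference with the gpt4all Python bindings.
# The model filename below is an example; any GGUF model supported by your
# gpt4all version should work.
from gpt4all import GPT4All

# device="gpu" requests a supported GPU backend if one is available;
# newer releases may also accept more specific strings such as "cuda".
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf", device="gpu")

# chat_session() keeps conversation context between generate() calls.
with model.chat_session():
    print(model.generate("Name three uses of a local LLM.", max_tokens=128))
```

This is a sketch of the bindings path, not a replacement for the removed gpt4all-api docs; check the current gpt4all Python SDK documentation for the supported device values on your hardware.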