Ryan-yang125 / ChatLLM-Web

🗣️ Chat with LLMs like Vicuna entirely in your browser with WebGPU, safely, privately, and with no server. Powered by web-llm.

Home Page: https://chat-llm-web.vercel.app

Does it support custom models?

yuehengwu opened this issue

I tried to add the DoctorGPT model: I replaced the model files in /public/lib/vicuna-7b and updated config.json so that cacheUrl points to the local model at http://localhost:3000/lib/WebLLM/vicuna-7b/doctorGPT/.
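For reference, the change was roughly of this shape (a minimal sketch: the cacheUrl key and URL come from the description above; I have not verified the rest of ChatLLM-Web's config.json schema, so any other keys are omitted):

```json
{
  "cacheUrl": "http://localhost:3000/lib/WebLLM/vicuna-7b/doctorGPT/"
}
```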

However, the webpage shows an error:

[Screenshot attached, 2023-09-13 18:57:08, showing the error message]

Consider trying https://github.com/mlc-ai/web-llm-chat, and open issues in the main web-llm repo for new model support requests.
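If you go the web-llm route, registering a custom model is generally done through an app config passed to the engine, along these lines (a hedged sketch against web-llm's newer TypeScript API; field names have changed across versions, with older releases using model_url/local_id instead of model/model_id, and the DoctorGPT URLs below are hypothetical placeholders, so check the web-llm README before relying on this):

```typescript
import { CreateMLCEngine, prebuiltAppConfig } from "@mlc-ai/web-llm";

async function main() {
  // Extend the prebuilt model list with a custom entry. A custom model
  // generally needs both MLC-compiled weights and a compiled WebGPU model
  // library (.wasm); pointing at raw weights alone is usually not enough.
  const appConfig = {
    ...prebuiltAppConfig,
    model_list: [
      ...prebuiltAppConfig.model_list,
      {
        // Hypothetical URLs: substitute your own hosted MLC artifacts.
        model: "https://huggingface.co/your-org/DoctorGPT-q4f16_1-MLC",
        model_id: "DoctorGPT-q4f16_1",
        model_lib: "https://your-host.example/DoctorGPT-q4f16_1-webgpu.wasm",
      },
    ],
  };

  const engine = await CreateMLCEngine("DoctorGPT-q4f16_1", { appConfig });

  // web-llm exposes an OpenAI-style chat completions interface.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Hello, doctor!" }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```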