Does it support custom models?
yuehengwu opened this issue
I tried to add the DoctorGPT model: I replaced the model files under /public/lib/vicuna-7b and edited config.json so that cacheUrl points to the local model at http://localhost:3000/lib/WebLLM/vicuna-7b/doctorGPT/.
However, the webpage shows an error:
[Screenshot 2023-09-13 18:57:08: browser error message]
Consider trying https://github.com/mlc-ai/web-llm-chat, and file new-model support requests as issues on the main web-llm repo. Custom models are supported, but they must first be compiled with MLC-LLM and served together with a matching WebGPU wasm library; see the sketch below.
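For reference, registering a custom model in recent releases of @mlc-ai/web-llm looks roughly like the following sketch. Field names have changed across versions, and every URL and model id below is a placeholder, not a verified working configuration:

```ts
import { CreateMLCEngine, AppConfig } from "@mlc-ai/web-llm";

// All URLs and ids below are placeholders. The weights directory must contain
// an MLC-compiled model, and the wasm library must match that compiled model.
const appConfig: AppConfig = {
  model_list: [
    {
      model: "http://localhost:3000/lib/WebLLM/vicuna-7b/doctorGPT/", // compiled weights (placeholder)
      model_id: "DoctorGPT-q4f16_1", // arbitrary local id
      model_lib: "http://localhost:3000/lib/WebLLM/doctorGPT-webgpu.wasm", // runtime lib (placeholder)
    },
  ],
};

async function main() {
  // Load the custom model by its id, passing the app config that registers it.
  const engine = await CreateMLCEngine("DoctorGPT-q4f16_1", { appConfig });

  // Query it through the OpenAI-style chat completions API.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Hello, doctor." }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```

The key point is that web-llm does not load raw model checkpoints from a URL by itself: the weights must be converted to MLC format and paired with a compiled WebGPU library before the engine can fetch and run them.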