Imported `.bin` models don't generate meaningful answers
tsilvs opened this issue
Describe the bug
Any local model I choose replies with `Hello, I am Bavarder, a Chit-Chat AI` instead of generating a response to my prompt.
To Reproduce
Steps to reproduce the behavior:
- Enable only the `local` provider in Preferences, and disable all others
- Download a model supported by GPT4All (e.g. `nous-hermes-13b.ggmlv3.q4_0.bin`)
- Place or hardlink the `.bin` file in the `~/.var/app/io.github.Bavarder.Bavarder/cache/bavarder/models` directory
- Open the Bavarder app
- Click the cloud button so it shows a crossed cloud (a mouse-hover tooltip here would also help clarify what this control does)
- Open the dot menu and select your local model (e.g. `nous-hermes-13b.ggmlv3.q4_0.bin`)
- Create a new chat
- Send a prompt
- Receive the answer `Hello, I am Bavarder, a Chit-Chat AI`
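For reference, the "place or hardlink" step above can be sketched as follows. This is a minimal sketch, not the commands I necessarily ran verbatim: the cache path is the Flatpak location from this report, but the snippet demos it against a temporary directory with a stand-in file so it is self-contained.

```shell
# Stand-in locations (real usage: MODEL_SRC = your downloaded model,
# CACHE_DIR = ~/.var/app/io.github.Bavarder.Bavarder/cache/bavarder/models).
DEMO="$(mktemp -d)"
MODEL_SRC="$DEMO/nous-hermes-13b.ggmlv3.q4_0.bin"
CACHE_DIR="$DEMO/.var/app/io.github.Bavarder.Bavarder/cache/bavarder/models"

: > "$MODEL_SRC"   # stand-in for the already-downloaded model file
mkdir -p "$CACHE_DIR"

# Hardlink to avoid duplicating a multi-GiB file; fall back to a copy
# if source and cache directory are on different filesystems.
ln "$MODEL_SRC" "$CACHE_DIR/" 2>/dev/null || cp "$MODEL_SRC" "$CACHE_DIR/"
echo "placed: $CACHE_DIR/$(basename "$MODEL_SRC")"
```

Hardlinking (rather than copying) matters here because of the storage constraints noted under "Additional context" below.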
Expected behavior
When I send a prompt, I should receive an answer generated by the selected model.
Environment
io.github.Bavarder.Bavarder 1.0.0
Environment: GNOME
Gtk: 4.12.5
Python: 3.11.9
OS: Linux 6.8.10-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Fri May 17 21:20:15 UTC 2024
Providers: ['local']
Additional context
I already have many GiBs of LLMs downloaded on my PC. My connection is metered and slow, and storage is limited as well.
I can't afford to download hundreds of GiBs of data again: it would be slow, expensive, and polluting.