ivanfioravanti / chatbot-ollama

Chatbot Ollama is an open source chat UI for Ollama.


Previous selected model used in chat

ivanfioravanti opened this issue · comments

It seems that the previously selected model is used in a chat instead of the currently selected one.

I have the same kind of issue. I only have mistral:latest in Ollama, and when I run the app locally the selector shows mistral, but the model at the top is still llama2:latest. I changed the DEFAULT_MODEL value in .env.local and ollama.ts, but I always get this error:

[OllamaError: stat /Users/antoine/.ollama/models/manifests/registry.ollama.ai/library/llama2/latest: no such file or directory] {
  name: 'OllamaError'

The only way I got it working was to directly override the model name in the chat request. I'm not really familiar with this kind of project, so I can't really fix it right now; it would be great if someone is keen to investigate this.
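
For reference, a minimal sketch of the configuration change described above, assuming the app reads DEFAULT_MODEL from .env.local at startup; the value just needs to match a model name that `ollama list` actually reports:

```
# .env.local
# Point the default at a model that is actually pulled locally,
# e.g. mistral:latest instead of llama2:latest.
DEFAULT_MODEL=mistral:latest
```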

Yes, for some reason, no matter what default model you choose, the chatbot always looks for the llama2:latest model at the beginning. If you keep llama2:latest downloaded on your machine, though, it will work and you'll be able to switch between different models.
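
If you want to use that workaround until a fix lands, pulling the default model is a single command (assuming a stock Ollama install):

```
ollama pull llama2:latest
```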

The issue lies in the ModelSelector.tsx component.
The text rendered in the selector isn't the actual selected model until the selector fires an onChange event.
Working on a solution, but as is, just keep llama2:latest installed.
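
Not the actual component from this repo, just a hedged sketch of the fix the comment describes: keep the selected model in React state and initialise it from the model list Ollama actually returns, so the chat request can't silently fall back to a stale default. The prop names (models, onModelChange) and the OllamaModel shape are assumptions for illustration.

```tsx
import { useEffect, useState } from 'react';

interface OllamaModel {
  name: string; // e.g. "mistral:latest", as reported by Ollama
}

export function ModelSelector({
  models,
  onModelChange,
}: {
  models: OllamaModel[];
  onModelChange: (name: string) => void;
}) {
  const [selected, setSelected] = useState<string>('');

  // Once the model list arrives, select the first real entry instead of
  // assuming a hard-coded default such as llama2:latest.
  useEffect(() => {
    if (models.length > 0 && selected === '') {
      setSelected(models[0].name);
      onModelChange(models[0].name);
    }
  }, [models, selected, onModelChange]);

  return (
    <select
      value={selected}
      onChange={(e) => {
        setSelected(e.target.value);
        onModelChange(e.target.value);
      }}
    >
      {models.map((m) => (
        <option key={m.name} value={m.name}>
          {m.name}
        </option>
      ))}
    </select>
  );
}
```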

Please try with the latest version; this should be fixed now.

Works like a charm, thanks @ivanfioravanti (even without llama2:latest installed)!

Just a little suggestion: if Ollama is not running, the loader just keeps spinning. It would be nice to show a message telling the user to run Ollama.

> Just a little suggestion: if Ollama is not running, the loader just keeps spinning. It would be nice to show a message telling the user to run Ollama.

Great suggestion!!!

Adding a warning message in a separate issue.
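
A hedged sketch of how such a check could look, assuming the default Ollama endpoint on http://127.0.0.1:11434 and its /api/tags listing endpoint; the function name and message wording are made up for illustration:

```ts
const OLLAMA_HOST = process.env.OLLAMA_HOST ?? 'http://127.0.0.1:11434';

// Returns null when Ollama is reachable, otherwise a message the UI can
// render instead of an endless spinner.
export async function checkOllamaRunning(timeoutMs = 2000): Promise<string | null> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // /api/tags lists locally available models; any response means Ollama is up.
    const res = await fetch(`${OLLAMA_HOST}/api/tags`, { signal: controller.signal });
    return res.ok ? null : `Ollama responded with status ${res.status}`;
  } catch {
    return 'Ollama does not appear to be running. Start it (e.g. `ollama serve`) and reload.';
  } finally {
    clearTimeout(timer);
  }
}
```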