ItzCrazyKns / Perplexica

Perplexica is an AI-powered search engine. It is an open-source alternative to Perplexity AI.


Cannot change the Chat model Provider to Ollama

adithyaric opened this issue · comments

I want to change the Chat model Provider to my Ollama instance on WSL, but it doesn't change automatically.

I have tried this:

OLLAMA: Your Ollama API URL. You should enter it as http://host.docker.internal:PORT_NUMBER. If you installed Ollama on port 11434, use http://host.docker.internal:11434. For other ports, adjust accordingly. You need to fill this if you wish to use Ollama's models instead of OpenAI's.
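For reference, a minimal sketch of what that entry might look like in config.toml. The section name (API_ENDPOINTS) follows the sample.config.toml shipped with the repo at the time; adjust if your version differs:

```toml
[API_ENDPOINTS]
# URL of the Ollama API as seen from inside the Perplexica container.
# host.docker.internal resolves to the host machine under Docker Desktop
# (Windows/macOS with WSL); on plain Linux use the host's private IP instead.
OLLAMA = "http://host.docker.internal:11434"
```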

and this: https://github.com/ItzCrazyKns/Perplexica/?tab=readme-ov-file#ollama-connection-errors,
but the result is the same. My Ollama runs at http://localhost:11434.

Screenshot: [images attached to the original issue]

Seems like it's not able to connect to Ollama. Why are you using localhost:11434 instead of host.docker.internal:11434 or private_ip:11434?

> Seems like it's not able to connect to Ollama. Why are you using localhost:11434 instead of host.docker.internal:11434 or private_ip:11434?

Yes, I think it's a connection problem with Ollama. My Ollama server is running at localhost:11434, and I also tried host.docker.internal:11434, but it still doesn't work.
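A quick way to narrow this down is to test the URL from inside the container rather than from the host, since localhost inside a container refers to the container itself, not to the machine running Ollama. A rough sketch, assuming the backend container is named perplexica-backend (check `docker ps` for the real name) and that curl is available in the image:

```sh
# From the host: Ollama should reply "Ollama is running"
curl http://localhost:11434

# From inside the container: this is the address Perplexica actually uses
docker exec -it perplexica-backend curl http://host.docker.internal:11434
```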

I have the same problem under Windows 11: I renamed sample.config.toml to config.toml, then ran "docker compose up -d"
and opened http://localhost:3000/. I get the message "internal server error" and no models are shown.
(Ollama is running and Docker is running; tested with "ollama list" and "docker --version".)
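One thing worth checking on Windows: by default Ollama listens only on 127.0.0.1, so connections arriving via host.docker.internal can be refused even though "ollama list" works locally. Setting the OLLAMA_HOST environment variable (a documented Ollama setting) makes it listen on all interfaces; a sketch for Windows:

```powershell
# Make Ollama bind to all interfaces instead of 127.0.0.1 only.
# setx persists the variable for new processes; restart Ollama afterwards.
setx OLLAMA_HOST "0.0.0.0"
```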


I'm using a model loaded in LM Studio, hosted on localhost, but the custom_openai base URL in the settings is set to host.docker.internal. That works for me.

Your error is not about Ollama or the backend, but about SearXNG failing its startup checks when connecting to SoundCloud. Find the soundcloud lines in ./searxng/settings.yml and remove them.
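For anyone searching their file, the block to remove might look roughly like the following (a sketch based on stock SearXNG settings; the exact keys in ./searxng/settings.yml may differ):

```yaml
engines:
  # Delete or disable this entry if SearXNG's startup checks fail
  # while trying to reach SoundCloud:
  - name: soundcloud
    engine: soundcloud
    shortcut: sc
```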


I'm having a similar issue, unrelated to SoundCloud.

I have even removed the network specification from the docker-compose file. Other Docker containers connect to the Ollama instance without issue (see the docker-compose sketch below).

I am not seeing any errors when tailing the logs of the searxng and backend services.
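If this is on plain Linux, one more thing to rule out: host.docker.internal is not defined there by default (unlike Docker Desktop), so it has to be mapped explicitly. A hedged docker-compose sketch; the service name is illustrative and should match your compose file:

```yaml
services:
  perplexica-backend:   # illustrative name; match your docker-compose.yaml
    extra_hosts:
      # Map host.docker.internal to the host's gateway IP (Docker 20.10+)
      - "host.docker.internal:host-gateway"
```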


Opened PR #137 that seemingly fixes this.