jacoblee93 / fully-local-pdf-chatbot

Yes, it's another chat over documents implementation... but this one is entirely local!

Home Page: https://webml-demo.vercel.app

Failed to fetch

ushakrishnan opened this issue

Tested both the online Vercel app and the source code downloaded and run locally. Both fail with the same error:
There was an issue with querying your PDF: Failed to fetch. Make sure you are running Ollama.

  • I can confirm both mistral and llama2 have been pulled, and logging into the Docker container I can run prompts and get output.
  • I am running the Ollama Docker container and can successfully curl it from my desktop (see the quick check below).
  • I set OLLAMA_ORIGINS for each of the two endpoints while testing; neither of the above works.
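
For reference, a minimal reachability check against the Ollama API (a sketch, assuming the default port 11434; the /api/tags endpoint lists the locally pulled models):

```bash
# Should return JSON listing the pulled models (e.g. mistral, llama2)
curl http://localhost:11434/api/tags
```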

BTW - it was a clean install; thank you for taking care to specify versions and keep the package clean!

Thanks for the kind words!

Do you see anything in the Network tab if you open devtools? Also, what browser are you in?

Windows 11; Microsoft Edge - same error on both

  • tried locally on Windows
  • tried on WSL (Ubuntu)

Ollama runs as a Docker image on Windows 11.

Ollama is running and its API (http://localhost:11434) is reachable from both the Windows 11 desktop and WSL Ubuntu; I can curl the API from both.

One issue, certainly: Ollama running in Docker is on port 11434 (not 11435). Also, when the container starts, ollama serve is started automatically (the default run), so I am unable to launch serve with CORS settings. It looks like adding CORS after startup is not available yet. Will try on WSL without Docker.
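
A quick sanity check on which host port the container actually publishes (assuming the container is named ollama, as in the run command further below):

```bash
# The PORTS column should show a mapping like 0.0.0.0:11434->11434/tcp
docker ps --filter "name=ollama"
```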

Other ideas?

Ah yeah that would do it - are you able to change the start command/exposed port?

Yes, I can change the port. But 'ollama serve' is part of the Docker container's startup script, and I do not believe it is possible to add allowed origins for CORS after startup.

So I can use Docker only if I modify the image and its startup script.

Is there a way to stop the Ollama service and restart it with additional parameters? If so, that would help, but I can't find how.

I was able to run it successfully on WSL Ubuntu (on Windows), with both the app and Ollama running in WSL Ubuntu. However, on Ubuntu I had to kill the service with sudo service ollama stop and then restart it with OLLAMA_ORIGINS and OLLAMA_HOST set, as shown below.
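
The restart sequence that worked for me (assuming Ollama is installed as a service and the app runs on http://localhost:3000):

```bash
# Stop the installed Ollama service, then relaunch it in the foreground
# with CORS allowed for the app's origin
sudo service ollama stop
OLLAMA_ORIGINS=http://localhost:3000 OLLAMA_HOST=127.0.0.1:11434 ollama serve
```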

There is no universal "stop" command for ollama serve. The only options are to kill the process via the OS, or to stop the service if Ollama is installed as one (which was the case on my WSL Ubuntu on Windows 11).
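
If Ollama is not installed as a service, killing the process directly is the fallback (a sketch; the match pattern may need adjusting to your setup):

```bash
# Find and terminate the running ollama serve process by name
pkill -f "ollama serve"
```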

Note: the Docker image's startup script runs ollama serve, so killing the process stops the container. For now, I am going to recreate the container with the startup changed to allow the needed origins. Will update.

For those of you on Windows who would prefer not to use the Windows Subsystem for Linux: use the Ollama Docker image.

Changes to be made if you are using Ollama Docker (use a specific domain instead of *, which is dangerous!):

  • docker run -d -v ollama:/root/.ollama -p 11434:11434 -e OLLAMA_ORIGINS="*" --name ollama ollama/ollama
  • For simplicity, I changed the port in the code to 11434 to match the port the container publishes. You can also remap the port during docker run instead, as sketched below.
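
A sketch of the remapping alternative, leaving the app code on its original port 11435 and publishing the container's 11434 there (same named volume assumed):

```bash
# Host port 11435 -> container port 11434; the app keeps talking to localhost:11435
docker run -d -v ollama:/root/.ollama -p 11435:11434 -e OLLAMA_ORIGINS="*" --name ollama ollama/ollama
```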