ItzCrazyKns / Perplexica

Perplexica is an AI-powered search engine. It is an open-source alternative to Perplexity AI.

Docker install fails

taltoris opened this issue

Describe the bug
I'm on Ubuntu 22.04, running in a VM. I don't have a lot of experience with Node / NPM.

I try a simple docker compose up and get the following error:

Module not found: Can't resolve 'react-text-to-speech'

I can see react-text-to-speech in my node_modules folder, but yarn can't find it for some reason.

Additional context
[+] Building 10.7s (9/9) FINISHED docker:default
=> [perplexica-frontend internal] load build definition from app.dockerfile 0.0s
=> => transferring dockerfile: 308B 0.0s
=> [perplexica-frontend internal] load metadata for docker.io/library/node:alpine 0.3s
=> [perplexica-frontend internal] load .dockerignore 0.0s
=> => transferring context: 55B 0.0s
=> [perplexica-frontend 1/5] FROM docker.io/library/node:alpine@sha256:916b42f9e83466eb17d60a441a96f5cd57033bbfee6a80eae8e3249f34cf8dbe 0.0s
=> [perplexica-frontend internal] load build context 0.0s
=> => transferring context: 1.75kB 0.0s
=> CACHED [perplexica-frontend 2/5] WORKDIR /home/perplexica 0.0s
=> CACHED [perplexica-frontend 3/5] COPY ui /home/perplexica/ 0.0s
=> CACHED [perplexica-frontend 4/5] RUN yarn install 0.0s
=> ERROR [perplexica-frontend 5/5] RUN yarn build 10.4s

[perplexica-frontend 5/5] RUN yarn build:
0.454 yarn run v1.22.19
0.481 $ next build
1.167 Attention: Next.js now collects completely anonymous telemetry regarding usage.
1.167 This information is used to shape Next.js' roadmap and prioritize features.
1.167 You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
1.168 https://nextjs.org/telemetry
1.168
1.282 ▲ Next.js 14.1.4
1.282
1.369 Creating an optimized production build ...
3.319 (node:41) [DEP0040] DeprecationWarning: The punycode module is deprecated. Please use a userland alternative instead.
3.319 (Use node --trace-deprecation ... to show where the warning was created)
10.33 Failed to compile.
10.33
10.33 ./components/MessageBox.tsx
10.33 Module not found: Can't resolve 'react-text-to-speech'
10.33
10.33 https://nextjs.org/docs/messages/module-not-found
10.33
10.33 Import trace for requested module:
10.33 ./components/Chat.tsx
10.33 ./components/ChatWindow.tsx
10.33
10.34
10.34 > Build failed because of webpack errors
10.37 error Command failed with exit code 1.
10.37 info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
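In case it helps others hitting the same module-not-found error, one hedged way to narrow it down (assuming the frontend lives in the repo's ui/ directory, as the COPY ui step in the build log suggests) is to confirm the dependency is actually declared and try a clean install outside Docker:

```sh
# Check that the dependency is declared in the frontend's package.json
grep react-text-to-speech ui/package.json

# Clean install and build on the host to see whether the lockfile resolves it
cd ui
rm -rf node_modules
yarn install
yarn build
```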

Try re-cloning Perplexica from GitHub (delete your current clone first), then try building the images again.
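A hedged sketch of that re-clone and rebuild, assuming the stack is driven by the repo's compose file and that the clone lives in a directory named Perplexica:

```sh
# Stop the stack and remove the containers plus the images it built
docker compose down --rmi all

# Start over from a fresh clone
cd ..
rm -rf Perplexica
git clone https://github.com/ItzCrazyKns/Perplexica.git
cd Perplexica

# Rebuild the images from scratch and bring the stack back up
docker compose up --build -d
```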

That actually worked. Now it successfully launches. Kinda...

Perplexica just keeps showing a spinning loading graphic. It may be related to the fact that SearxNG keeps timing out for all search engines, even after increasing the timeout parameter.

Perplexica just keeps a spinning loading graphic.

Same issue here. It looks like it is trying to contact OpenAI and does not offer the option of selecting Ollama.


Is the Ollama API base URL already set in the config.toml?

cat config.toml

```toml
[GENERAL]
PORT = 3001 # Port to run the server on
SIMILARITY_MEASURE = "cosine" # "cosine" or "dot"

[API_KEYS]
OPENAI = "" # OpenAI API key - sk-1234567890abcdef1234567890abcdef
GROQ = "" # Groq API key - gsk_1234567890abcdef1234567890abcdef

[API_ENDPOINTS]
SEARXNG = "http://localhost:32768" # SearxNG API URL
OLLAMA = "http://host.docker.internal:11434" # Ollama API URL - http://host.docker.internal:11434
```
I can talk to Ollama fine from the command line.

Perhaps if the UI showed some status message instead of just an animation, we could see what the problem is?
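A quick, hedged check of whether the backend container can actually reach Ollama at that URL (the service name perplexica-backend is an assumption rather than something confirmed in this thread, and wget is used in case curl is not present inside the container):

```sh
# From the host: Ollama should list its models on the API port
curl http://127.0.0.1:11434/api/tags

# From inside the backend container (service name assumed; the container
# must be running for exec to work)
docker compose exec perplexica-backend wget -qO- http://host.docker.internal:11434/api/tags
```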

Whoops, guess I copied the wrong line, reinforced by the example in the config being the same. Perhaps the config should list all three variants.

Still the same after rebooting. It appears that Gentoo has a bug where restarting Docker fails to let it connect properly.

I installed Docker via the package manager; perhaps the non-package route includes other bits that I have not installed and which were not pulled in, like:

app-containers/docker-proxy
Available versions: 0.8.0_p20230118^st
Homepage: https://github.com/docker/libnetwork
Description: Docker container networking

?

I also have this:
Firefox can’t establish a connection to the server at localhost:32768

Thanks.


Are you hosting it on Docker or where? You also need to rebuild the images (make sure the previous ones are deleted) after making a change to the compose file.

Thanks for your help, please excuse my ignorance, this is all new to me. Probably other newbies will have similar issues in the future and this discussion will help.

Can you please clarify what you mean with "rebuild the images" ?

Everything is running on my local machine. I followed the install instructions, and installed Docker via Gentoo's portage tool.

So for example, after a reboot, I start docker running, start ollama, then do your docker compose up -d step.

Is that correct? Or is there some component that I need to reinstall?

Thanks, Ian
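For reference, that startup sequence would look roughly like the following (a hedged sketch; the OpenRC service name and ollama serve are assumptions about this particular Gentoo setup, not something stated in the thread):

```sh
# After a reboot: start the Docker daemon (OpenRC), then Ollama
sudo rc-service docker start
ollama serve &   # or however Ollama is normally started on this machine

# From inside the Perplexica clone, bring the stack up
docker compose up -d
```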


Re-clone Perplexica from GitHub, delete the previous containers and images related to Perplexica in Docker, and follow the installation instructions again. If you still face the issue, provide more context so I can help you. If you're on a Linux-based OS, instead of http://host.docker.internal:11434 as your Ollama URL, use http://private_ip_of_your_computer:11434, replacing it with the private IP of the computer where Perplexica is hosted.
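A hedged sketch of that change (the IP shown is only an example placeholder; the OLLAMA key is the one from the config.toml quoted earlier):

```sh
# Find the private IP of the machine hosting Perplexica
hostname -I     # or: ip -4 addr show

# Point the Ollama endpoint at that IP in config.toml, e.g.
#   OLLAMA = "http://192.168.1.50:11434"   <- example IP, replace with yours
sed -i 's|^OLLAMA = .*|OLLAMA = "http://192.168.1.50:11434" # Ollama API URL|' config.toml

# Rebuild and recreate the containers so they pick up the change
docker compose up -d --build
```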

Thanks. No experience with Docker, but Gemini taught me CLI ways to manage the models.

I reinstalled and the home page loaded (yes, I was surprised :-) ), but I entered a query without going to settings to set a default model. So now we are back at the UI animation ... even opening the settings shows the same animation.

I restarted the back end, which let me reload the home page.

When I go to settings, it does not list the locally running instance of llama 3 (allegedly):

So, in short, my model number is LLaMA-Base!

which version?
I'm running on version 1.0.2 of the LLaMA model!

So it looks like the back end crashes when it gets a query.
1ea16f2a42e9 perplexica-perplexica-backend "docker-entrypoint.s…" 32 minutes ago Exited (1) About a minute ago

I am using 127.0.0.1.
I have set SearxNG to use port 4000, since that is where it responds. That works fine, even directly.

Thanks, Ian
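One hedged way to see why the backend exits and whether SearxNG is reachable (the container ID is taken from the docker ps line above; adjust the port if yours differs):

```sh
# Last output of the exited backend container
docker logs --tail 100 1ea16f2a42e9

# Confirm SearxNG responds on the port configured in config.toml
curl -I http://127.0.0.1:4000/
```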

Found this in the Docker log:

time="2024-05-08T15:13:42.102770168+02:00" level=error msg="[resolver] failed to query DNS server: 192.168.1.254:53, query: ;.\tIN\t AAAA" error="read udp 172.18.0.2:47896->192.168.1.254:53: i/o timeout"

I am running local DNS, which works.

nslookup

server 192.168.1.254
Default server: 192.168.1.254
Address: 192.168.1.254#53
ibm.com
Server: 192.168.1.254
Address: 192.168.1.254#53

Non-authoritative answer:
Name: ibm.com
Address: 95.100.29.144
Name: ibm.com
Address: 2600:1416:a000:18d::3831
Name: ibm.com
Address: 2600:1416:a000:194::3831

Not sure why it is trying to go to that site ... perhaps a search. Hope it's not a tracker.

Also a few of these:
level=warning msg="no trace recorder found, skipping"
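The resolver timeout (172.18.0.2 trying to reach 192.168.1.254:53) suggests the containers cannot reach the LAN DNS server. A common workaround, offered only as a hedged sketch and not something the Perplexica docs prescribe, is to give the Docker daemon explicit DNS servers and restart it:

```sh
# Add explicit DNS servers for containers (only if /etc/docker/daemon.json
# does not already exist -- otherwise merge the "dns" key in by hand)
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "dns": ["192.168.1.254", "1.1.1.1"]
}
EOF

# Restart the daemon (OpenRC on Gentoo; use systemctl on systemd systems)
sudo rc-service docker restart

# Recreate the Perplexica containers
docker compose up -d --force-recreate
```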


It's very hard to communicate here; I would recommend you join our Discord server: https://discord.gg/26aArMy8tT. There we can communicate much better, and we have an entire community ready to help and chat with you.

I interrogated Gemini to try to figure out why the backend crashes, which led to trying docker run --network localhost (container number) and docker run --network 127.0.0.1 (container number); both exited with an error.
But docker ps -a then showed containers with names like strange_albattani, crazy_yonath and intelligent_nobel.
These names show up on codesandbox.io.

I do not understand how they could be "container names" in this project ....
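For what it's worth, those names do not come from Perplexica or codesandbox.io: when a container is started without an explicit --name, Docker generates a random adjective_scientist name (strange_albattani, crazy_yonath, and intelligent_nobel are all from that generator), so these are just leftovers of the manual docker run attempts. A hedged sketch of cleaning them up and reading the real backend logs through compose (service name assumed to be perplexica-backend):

```sh
# Remove the stray containers created by the manual `docker run` attempts
docker rm strange_albattani crazy_yonath intelligent_nobel

# Let compose manage the network, and read the backend's logs instead
docker compose up -d
docker compose logs -f perplexica-backend
```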