rashadphz / farfalle

🔍 AI search engine - self-host with local or cloud LLMs

Home Page: https://www.farfalle.dev/


No Search Results

boydthomson opened this issue · comments

Followed instructions on an LXC running updated Debian 12. Set my OpenAI API key and set SEARCH_PROVIDER=searxng. Search looks like it's executing, and I'm prompted for another chat entry, but no results show up.

I tried SEARCH_PROVIDER=tavily and got the same results.

Logs for frontend and searxng show no errors but...

:~/farfalle# docker logs farfalle-backend-1
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

I'm confused about whether this message is telling me I need to install PyTorch somewhere.

Hey Boyd. No need to worry about the PyTorch message.

When you visit https://localhost:8080/, do you see this screen?
[screenshot]

Thanks for the tips.

I do not have a connection to searxng. "Firefox can’t establish a connection to the server at 192.168.180.30:8080."
Port 3000 works fine for the front end.

I tried changing https to http in the docker-compose.dev.yaml but that made no difference.

Logs show no errors:
root@search:~/farfalle# docker logs searxng
SearXNG version 2024.5.29+0fa81fc78
Use existing /etc/searxng/uwsgi.ini
Use existing /etc/searxng/settings.yml
Listen on 0.0.0.0:8080
[uWSGI] getting INI configuration from /etc/searxng/uwsgi.ini
[uwsgi-static] added mapping for /static => /usr/local/searxng/searx/static
*** Starting uWSGI 2.0.23 (64bit) on [Thu May 30 17:44:21 2024] ***
compiled with version: 13.2.1 20231014 on 30 November 2023 14:34:33
os: Linux-6.5.13-5-pve #1 SMP PREEMPT_DYNAMIC PMX 6.5.13-5 (2024-04-05T11:03Z)
nodename: 97a27b951196
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 2
current working directory: /usr/local/searxng
detected binary path: /usr/sbin/uwsgi
chdir() to /usr/local/searxng/searx/
your processes number limit is 63239
your memory page size is 4096 bytes
detected max file descriptor number: 524288
building mime-types dictionary from file /etc/mime.types...1390 entry found
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address 0.0.0.0:8080 fd 3
Python version: 3.11.9 (main, Apr 14 2024, 13:40:00) [GCC 13.2.1 20231014]
Python main interpreter initialized at 0x75cec3a36718
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 362016 bytes (353 KB) for 8 cores
*** Operational MODE: preforking+threaded ***
added /usr/local/searxng/ to pythonpath.
spawned uWSGI master process (pid: 7)
spawned uWSGI worker 1 (pid: 10, cores: 4)
spawned uWSGI worker 2 (pid: 11, cores: 4)
spawned 4 offload threads for uWSGI worker 1
spawned 4 offload threads for uWSGI worker 2
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x75cec3a36718 pid: 10 (default app)
WSGI app 0 (mountpoint='') ready in 2 seconds on interpreter 0x75cec3a36718 pid: 11 (default app)
root@search:~/farfalle#

I don't understand why the log font changed to 'strike-through'

I removed the 127.0.0.0 from the searxng section of docker-compose.dev.yaml and I do get the SearXNG page on port 8080, and it does work for searches. However, I still get no results for searches on the frontend at port 3000.
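For reference, the change described above amounts to dropping the loopback-only bind from the SearXNG port mapping. This is a hypothetical sketch of the relevant fragment of docker-compose.dev.yaml (the comment above says "127.0.0.0"; the usual loopback binding is 127.0.0.1, and your copy of the file may differ):

```yaml
# Hypothetical fragment of docker-compose.dev.yaml
searxng:
  ports:
    # Before: bound to loopback only, so unreachable from other LAN hosts:
    # - "127.0.0.1:8080:8080"
    # After: listen on all interfaces so the LAN (and the frontend) can reach it:
    - "8080:8080"
```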

I am running farfalle on a server on my LAN. I finally got it working by following this instruction:
https://github.com/rashadphz/farfalle/blob/main/custom-setup-instructions.md

And changing the https to http in docker-compose.dev.yaml:
environment:
- SEARXNG_BASE_URL=http://${SEARXNG_BASE_URL:-localhost}/

I'm trying to figure out how to put together a pull request that uses the output of ip route get 8.8.8.8 | awk -F"src " 'NR==1{split($2,a," ");print a[1]}' to populate the NEXT_PUBLIC_API_URL value.
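To illustrate the idea: the command pulls the host's LAN IP out of the kernel's route to 8.8.8.8 (the `src` field), which can then be used to build NEXT_PUBLIC_API_URL. In this sketch a sample `ip route get` output line is hard-coded so the pipeline can be shown in isolation; on a real host you would pipe the command itself.

```shell
# Sample output of `ip route get 8.8.8.8` (hard-coded here for illustration;
# the interface/gateway values are placeholders).
route_line='8.8.8.8 via 192.168.180.1 dev eth0 src 192.168.180.30 uid 0'

# Split the line on "src " and take the first word after it: the LAN IP.
lan_ip=$(printf '%s\n' "$route_line" | awk -F"src " 'NR==1{split($2,a," ");print a[1]}')

# Use it to populate the frontend's backend URL.
export NEXT_PUBLIC_API_URL="http://${lan_ip}:8000"
echo "$NEXT_PUBLIC_API_URL"   # prints http://192.168.180.30:8000
```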

It works from other machines on the same LAN as the server, but I get the same problem (no results loading) when I access it from outside the LAN through a reverse proxy. I'm guessing it has something to do with:

environment:
- NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL:-http://192.168.180.30:8000}
- NEXT_PUBLIC_LOCAL_MODE_ENABLED=${NEXT_PUBLIC_LOCAL_MODE_ENABLED:-true}
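That guess is plausible: NEXT_PUBLIC_API_URL is fetched by the browser, not by the containers, so when you come in through a reverse proxy it must point at the publicly reachable backend address rather than the LAN IP. A hedged sketch, where search.example.com stands in for your proxy's hostname:

```shell
# Hypothetical fix for remote access: set the browser-facing backend URL to
# the address the reverse proxy exposes, not the server's LAN IP.
# "search.example.com" is a placeholder, not a real endpoint.
export NEXT_PUBLIC_API_URL="https://search.example.com:8000"
echo "$NEXT_PUBLIC_API_URL"

# Note: NEXT_PUBLIC_* variables are inlined into the Next.js frontend at build
# time, so the frontend image likely needs a rebuild after changing this, e.g.:
#   docker compose -f docker-compose.dev.yaml up -d --build
```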

Hey, I updated the docker-compose and added a .env-template. This custom setup should be clearer and more flexible now. The new instructions are in the README. Let me know if you have any problems setting this up!