rashadphz / farfalle

🔍 AI search engine - self-host with local or cloud LLMs

Home Page: https://www.farfalle.dev/

Error: 500 - Request URL is missing an 'http://' or 'https://' protocol

hirowa opened this issue · comments

Description:

When attempting to make a request, I encountered a 500 error.

Error Message:

"500: Request URL is missing an 'http://' or 'https://' protocol."

Environment:

  • OS: Windows 11
  • Farfalle: Installed locally, running on Docker Desktop
  • Ollama: Running on the default port
  • SearxNG: Used as the search provider

Additional Context:

  • Tried using the same configuration with "Groq" and it worked without issues.

Please let me know if further details are required.

I have the same error. Below is the log from the backend.

2024-06-02 20:22:35 INFO: 172.18.0.1:63658 - "GET / HTTP/1.1" 404 Not Found
2024-06-02 20:22:35 INFO: 172.18.0.1:63658 - "GET /favicon.ico HTTP/1.1" 404 Not Found
2024-06-02 20:22:41 INFO: 172.18.0.1:57224 - "OPTIONS /chat HTTP/1.1" 200 OK
2024-06-02 20:22:41 INFO: 172.18.0.1:57238 - "POST /chat HTTP/1.1" 200 OK
2024-06-02 20:22:43 Traceback (most recent call last):
2024-06-02 20:22:43 File "/workspace/.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
2024-06-02 20:22:43 yield
2024-06-02 20:22:43 File "/workspace/.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 373, in handle_async_request
2024-06-02 20:22:43 resp = await self._pool.handle_async_request(req)
2024-06-02 20:22:43 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-06-02 20:22:43 File "/workspace/.venv/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 167, in handle_async_request
2024-06-02 20:22:43 raise UnsupportedProtocol(
2024-06-02 20:22:43 httpcore.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.
2024-06-02 20:22:43
2024-06-02 20:22:43 The above exception was the direct cause of the following exception:
2024-06-02 20:22:43
2024-06-02 20:22:43 Traceback (most recent call last):
2024-06-02 20:22:43 File "/workspace/src/backend/chat.py", line 111, in stream_qa_objects
2024-06-02 20:22:43 async for completion in response_gen:
2024-06-02 20:22:43 File "/workspace/.venv/lib/python3.11/site-packages/llama_index/core/llms/callbacks.py", line 280, in wrapped_gen
2024-06-02 20:22:43 async for x in f_return_val:
2024-06-02 20:22:43 File "/workspace/.venv/lib/python3.11/site-packages/llama_index/llms/ollama/base.py", line 401, in gen
2024-06-02 20:22:43 async with client.stream(
2024-06-02 20:22:43 File "/usr/local/lib/python3.11/contextlib.py", line 210, in __aenter__
2024-06-02 20:22:43 return await anext(self.gen)
2024-06-02 20:22:43 ^^^^^^^^^^^^^^^^^^^^^
2024-06-02 20:22:43 File "/workspace/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1617, in stream
2024-06-02 20:22:43 response = await self.send(
2024-06-02 20:22:43 ^^^^^^^^^^^^^^^^
2024-06-02 20:22:43 File "/workspace/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1661, in send
2024-06-02 20:22:43 response = await self._send_handling_auth(
2024-06-02 20:22:43 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-06-02 20:22:43 File "/workspace/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1689, in _send_handling_auth
2024-06-02 20:22:43 response = await self._send_handling_redirects(
2024-06-02 20:22:43 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-06-02 20:22:43 File "/workspace/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1726, in _send_handling_redirects
2024-06-02 20:22:43 response = await self._send_single_request(request)
2024-06-02 20:22:43 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-06-02 20:22:43 File "/workspace/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1763, in _send_single_request
2024-06-02 20:22:43 response = await transport.handle_async_request(request)
2024-06-02 20:22:43 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-06-02 20:22:43 File "/workspace/.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 372, in handle_async_request
2024-06-02 20:22:43 with map_httpcore_exceptions():
2024-06-02 20:22:43 File "/usr/local/lib/python3.11/contextlib.py", line 158, in __exit__
2024-06-02 20:22:43 self.gen.throw(typ, value, traceback)
2024-06-02 20:22:43 File "/workspace/.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
2024-06-02 20:22:43 raise mapped_exc(message) from exc
2024-06-02 20:22:43 httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.
2024-06-02 20:22:43
2024-06-02 20:22:43 During handling of the above exception, another exception occurred:
2024-06-02 20:22:43
2024-06-02 20:22:43 Traceback (most recent call last):
2024-06-02 20:22:43 File "/workspace/src/backend/main.py", line 97, in generator
2024-06-02 20:22:43 async for obj in stream_qa_objects(chat_request):
2024-06-02 20:22:43 File "/workspace/src/backend/chat.py", line 140, in stream_qa_objects
2024-06-02 20:22:43 raise HTTPException(status_code=500, detail=detail)
2024-06-02 20:22:43 fastapi.exceptions.HTTPException: 500: Request URL is missing an 'http://' or 'https://' protocol.
2024-06-02 20:22:43

Even with a Groq key, it still has this issue.

Did you modify your OLLAMA_HOST environment variable at all?

I had it pointing to 0.0.0.0. I deleted that so it uses the default host now.
Now the error shown is "500: All connection attempts failed".
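
For reference, both errors track how the OLLAMA_HOST value reaches the HTTP client: 0.0.0.0 is a listen address rather than a URL, so httpx rejects it for lacking a scheme, while the default host resolves to localhost inside the backend container, which points at the container itself rather than the Windows host. A minimal sketch of a value that typically works with Docker Desktop, assuming Ollama runs on the host on its default port:

# Rejected: no http:// or https:// scheme
#   OLLAMA_HOST=0.0.0.0
# Fails from inside the container: localhost is the container, not the host
#   OLLAMA_HOST=http://localhost:11434
# Works with Docker Desktop: host.docker.internal resolves to the host machine
OLLAMA_HOST=http://host.docker.internal:11434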

Can you advise what to change if I want to point to a remote Ollama host?

Can I just change the line below?

  - OLLAMA_HOST=${OLLAMA_HOST:-http://172.16.66.201:11434}

The full docker-compose.dev.yaml is below:

services:
  backend:
    build:
      context: .
      dockerfile: ./src/backend/Dockerfile
    restart: always
    ports:
      - "8000:8000"
    environment:
      - OLLAMA_HOST=${OLLAMA_HOST:-http://172.16.66.201:11434}
      - TAVILY_API_KEY=${TAVILY_API_KEY}
      - BING_API_KEY=${BING_API_KEY}
      - SERP_API_KEY=${SERP_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - GROQ_API_KEY=${GROQ_API_KEY}
      - ENABLE_LOCAL_MODELS=${ENABLE_LOCAL_MODELS:-True}
      - SEARCH_PROVIDER=${SEARCH_PROVIDER:-tavily}
      - SEARXNG_BASE_URL=${SEARXNG_BASE_URL:-http://host.docker.internal:8080}
      - REDIS_URL=${REDIS_URL}
    develop:
      watch:
        - action: sync
          path: ./src/backend
          target: /workspace/src/backend
    extra_hosts:
      - "host.docker.internal:host-gateway"

  frontend:
    depends_on:
      - backend
    build:
      context: .
      dockerfile: ./src/frontend/Dockerfile
    restart: always
    environment:
      - NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL:-http://172.16.66.201:8000}
      - NEXT_PUBLIC_LOCAL_MODE_ENABLED=${NEXT_PUBLIC_LOCAL_MODE_ENABLED:-true}
    ports:
      - "3333:3000"
    develop:
      watch:
        - action: sync
          path: ./src/frontend
          target: /app
          ignore:
            - node_modules/

  searxng:
    container_name: searxng
    image: docker.io/searxng/searxng:latest
    restart: unless-stopped
    networks:
      - searxng
    ports:
      - "8080:8080"
    volumes:
      - ./searxng:/etc/searxng:rw
    environment:
      - SEARXNG_BASE_URL=http://${SEARXNG_BASE_URL:-localhost}/

networks:
  searxng:
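
For what it's worth, ${OLLAMA_HOST:-http://172.16.66.201:11434} is standard Compose variable substitution: if OLLAMA_HOST is set in your shell or in a .env file next to the compose file, that value is used; otherwise the default after :- applies. So pointing at a remote Ollama host should work either by editing that default, as above, or by setting the variable externally, e.g. (the IP here is only the remote host from this thread):

OLLAMA_HOST=http://172.16.66.201:11434

Either way the value needs an explicit http:// scheme, or the backend fails with the UnsupportedProtocol error above.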


Hey, I updated the docker-compose and added a .env-template. This custom setup should be clearer and more flexible now. The new instructions are in the README. Let me know if you have any problems setting this up!

You should be able to modify OLLAMA_HOST in your .env.
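
For example, a minimal .env in the project root might look like the sketch below; the variable names come from the compose file above, and the assumption that searxng is an accepted SEARCH_PROVIDER value follows from the original report:

# Point the backend at Ollama on the Docker Desktop host
OLLAMA_HOST=http://host.docker.internal:11434
ENABLE_LOCAL_MODELS=True
# Use SearxNG instead of the default tavily provider
SEARCH_PROVIDER=searxng
SEARXNG_BASE_URL=http://host.docker.internal:8080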