rashadphz / farfalle

🔍 AI search engine - self-host with local or cloud LLMs

Home Page: https://www.farfalle.dev/

500: Model is at capacity. Please try again later.

wwjCMP opened this issue · comments

Do you see "Ollama is running" when you visit http://localhost:11434/?

I use a remote Ollama service at http://192.168.101.19:11434, and I don't see Ollama receiving any requests.

Is your `.env` set to:

```
OLLAMA_HOST=http://192.168.101.19:11434
```
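For reference, an Ollama client typically resolves `OLLAMA_HOST` roughly like the sketch below; the backend's actual parsing isn't shown in this thread, so the helper name and default are illustrative only (the default is the remote address from this thread):

```python
import os

def ollama_base_url(default: str = "http://192.168.101.19:11434") -> str:
    # Sketch: resolve OLLAMA_HOST the way this compose file passes it to
    # the backend container; the real backend's parsing may differ.
    host = os.environ.get("OLLAMA_HOST", default).strip()
    if "://" not in host:
        # Bare "host:port" values need a scheme to form a usable base URL.
        host = "http://" + host
    return host.rstrip("/")

print(ollama_base_url() + "/api/tags")  # Ollama's model-list endpoint
```

If the value printed here is not the URL you expect, the container is not seeing your `.env` setting.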

> Is your .env set to: OLLAMA_HOST=http://192.168.101.19:11434

```yaml
services:
  backend:
    build:
      context: .
      dockerfile: ./src/backend/Dockerfile.dev
    restart: always
    ports:
      - "8600:8000"
    environment:
      - OLLAMA_HOST=${OLLAMA_HOST:-http://192.168.101.19:11434}
      - TAVILY_API_KEY=${TAVILY_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - GROQ_API_KEY=${GROQ_API_KEY}
      - ENABLE_LOCAL_MODELS=${ENABLE_LOCAL_MODELS:-True}
    env_file:
      - .env
    develop:
      watch:
        - action: sync
          path: ./src/backend
          target: /workspace/src/backend
    extra_hosts:
      - "host.docker.internal:host-gateway"
  frontend:
    depends_on:
      - backend
    build:
      context: .
      dockerfile: ./src/frontend/Dockerfile.dev
    restart: always
    environment:
      - NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL:-http://192.168.101.5:8600}
      - NEXT_PUBLIC_LOCAL_MODE_ENABLED=${NEXT_PUBLIC_LOCAL_MODE_ENABLED:-true}
    ports:
      - "3600:3000"
    develop:
      watch:
        - action: sync
          path: ./src/frontend
          target: /app
          ignore:
            - node_modules/
```

Don't change `OLLAMA_HOST` in the compose file; set it in your `.env` file instead.
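For example, a minimal `.env` for this setup might look like the following (the Ollama address is the one from this thread; the key placeholder is illustrative, and real keys should never be committed or posted publicly):

```
OLLAMA_HOST=http://192.168.101.19:11434
TAVILY_API_KEY=<your-key>
ENABLE_LOCAL_MODELS=True
```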

This is mostly a Docker networking issue. You may have to experiment with Docker networks to give the container access to the host network. Try removing `extra_hosts`, since your Ollama is on a different device.
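Before adjusting networks, it can help to confirm whether the container can even open a TCP connection to the Ollama host. A minimal sketch (assuming the backend image ships Python, run it via `docker compose exec backend python`; the address is the one from this thread):

```python
import socket
from urllib.parse import urlparse

def can_reach(host_url: str, timeout: float = 2.0) -> bool:
    # Attempt a plain TCP connection to host:port; run this inside the
    # backend container to test container -> Ollama routing specifically.
    parsed = urlparse(host_url)
    try:
        with socket.create_connection((parsed.hostname, parsed.port or 11434),
                                      timeout=timeout):
            return True
    except OSError:
        return False

print(can_reach("http://192.168.101.19:11434"))
```

If this prints `False` inside the container but the same check succeeds on the Docker host, the problem is container-to-LAN routing rather than Ollama itself.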

```yaml
services:
  backend:
    build:
      context: .
      dockerfile: ./src/backend/Dockerfile.dev
    restart: always
    ports:
      - "8600:8000"
    environment:
      - OLLAMA_HOST=${OLLAMA_HOST}
      - TAVILY_API_KEY=${TAVILY_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - GROQ_API_KEY=${GROQ_API_KEY}
      - ENABLE_LOCAL_MODELS=${ENABLE_LOCAL_MODELS:-True}
    env_file:
      - .env
    develop:
      watch:
        - action: sync
          path: ./src/backend
          target: /workspace/src/backend
  frontend:
    depends_on:
      - backend
    build:
      context: .
      dockerfile: ./src/frontend/Dockerfile.dev
    restart: always
    environment:
      - NEXT_PUBLIC_API_URL=http://192.168.101.5:8600
      - NEXT_PUBLIC_LOCAL_MODE_ENABLED=${NEXT_PUBLIC_LOCAL_MODE_ENABLED:-true}
    ports:
      - "3600:3000"
    develop:
      watch:
        - action: sync
          path: ./src/frontend
          target: /app
          ignore:
            - node_modules/
```

```
TAVILY_API_KEY=t
OLLAMA_HOST=http://192.168.101.19:11434
```

I don't have access to http://192.168.101.19:11434, but can you confirm that Ollama is running on the server?

P.S. Just to let you know, edit history is public on GitHub. You might want to disable your Tavily API key :)