
Farfalle

Open-source AI-powered search engine. Run your own local LLM or use the cloud.

Demo answering questions with llama3 on my M1 MacBook Pro:

local-demo.mp4

πŸ’» Live Demo

farfalle.dev (Cloud models only)

πŸ“– Overview

  • πŸ’» Live Demo
  • πŸ›£οΈ Roadmap
  • πŸ› οΈ Tech Stack
  • πŸƒπŸΏβ€β™‚οΈ Getting Started
  • πŸš€ Deploy

πŸ›£οΈ Roadmap

  • Add support for local LLMs through Ollama
  • Docker deployment setup

πŸ› οΈ Tech Stack

πŸƒπŸΏβ€β™‚οΈ Getting Started

Prerequisites

  • Docker
  • Ollama (only needed if you want to run local models)
    • Download any of the supported models: llama3, mistral, gemma
    • Start the Ollama server: ollama serve (see the example after this list)
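For example, to fetch the llama3 model and start the server (a minimal sketch; any of the supported models works the same way):

ollama pull llama3
ollama serve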

Get API Keys

  • Tavily (required, for web search)
  • OpenAI (optional, for cloud models)
  • Groq (optional, for cloud models)

1. Clone the Repo

git clone git@github.com:rashadphz/farfalle.git
cd farfalle

2. Add Environment Variables

touch .env

Add the following variables to the .env file:

Required

TAVILY_API_KEY=...

Optional

# Cloud Models
OPENAI_API_KEY=...
GROQ_API_KEY=...

# Rate Limit
RATE_LIMIT_ENABLED=
REDIS_URL=...

# Logging
LOGFIRE_TOKEN=...

Optional Variables (Pre-configured Defaults)

# API URL
NEXT_PUBLIC_API_URL=http://localhost:8000

# Local Models
NEXT_PUBLIC_LOCAL_MODE_ENABLED=true
ENABLE_LOCAL_MODELS=True
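Putting it together, a minimal .env for running local models might look like this (the Tavily key value is a placeholder, and the last three lines simply restate the pre-configured defaults above):

TAVILY_API_KEY=your-tavily-api-key
NEXT_PUBLIC_API_URL=http://localhost:8000
NEXT_PUBLIC_LOCAL_MODE_ENABLED=true
ENABLE_LOCAL_MODELS=True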

3. Run Containers

This requires Docker Compose version 2.22.0 or later.

docker-compose -f docker-compose.dev.yaml up -d

Visit http://localhost:3000 to view the app.
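If the app doesn't come up, the containers can be inspected with standard Docker Compose commands (nothing project-specific here):

docker-compose -f docker-compose.dev.yaml ps
docker-compose -f docker-compose.dev.yaml logs -f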

πŸš€ Deploy

Backend

Deploy to Render

After the backend is deployed, copy the web service URL to your clipboard. It should look something like: https://some-service-name.onrender.com.
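To quickly confirm the backend is reachable before wiring up the frontend, a plain HTTP request against your actual Render URL is enough (the URL below is just the placeholder from above):

curl -I https://some-service-name.onrender.com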

Frontend

Use the copied backend URL in the NEXT_PUBLIC_API_URL environment variable when deploying with Vercel.

Deploy with Vercel
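If you prefer the CLI over the Vercel dashboard, the environment variable can also be set with the standard Vercel CLI (you'll be prompted for the value):

vercel env add NEXT_PUBLIC_API_URL production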

And you're done! πŸ₯³

About

Open-source answer engine - run local or cloud models.

https://www.farfalle.dev/

License: Apache License 2.0

