LLocalSearch

Important

Hi people, I just woke up to 1k stars haha. I really appreciate the interest in my project, but it's going to take some time for me to go through everything. Hope you understand :) Please open issues (not DMs / emails), I promise I will get to them ASAP.

Docker / networking things are a bit bumpy at the moment, so please just open an issue if you think something should work but isn't. There are a lot of different host system configurations out there, and I can't test them all myself.

What it is

LLocalSearch is a search aggregator that runs completely locally, using LLM agents. The user asks a question, the system works through a chain of LLMs to find the answer, and the user can follow the agents' progress as well as the final answer. No OpenAI or Google API keys are needed.

Now with follow-up questions:

(Demo video: demo.mp4)


Features

  • ๐Ÿ•ต๏ธ Completely local (no need for API keys)
  • ๐Ÿ’ธ Runs on "low end" LLM Hardware (demo video uses a 7b model)
  • ๐Ÿค“ Progress logs, allowing for a better understanding of the search process
  • ๐Ÿค” Follow-up questions
  • ๐Ÿ“ฑ Mobile friendly interface
  • ๐Ÿš€ Fast and easy to deploy with Docker Compose
  • ๐ŸŒ Web interface, allowing for easy access from any device
  • ๐Ÿ’ฎ Handcrafted UI with light and dark mode

Status

This project is still in its very early days. Expect some bugs.

How it works

Please read infra for the most up-to-date picture of how it works.

Self-hosting & Development

Requirements

  • A running Ollama server, reachable from the container (see the quick check below)
    • A GPU is not needed, but recommended
  • Docker Compose
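Before starting the stack, you can confirm that Ollama is reachable over HTTP. A quick sanity check, assuming Ollama's default port 11434 (adjust if yours differs); keep in mind that inside a container, localhost refers to the container itself, so test the address the container will actually use:

# on the host: list installed models — only succeeds if the Ollama server answers
curl http://localhost:11434/api/tags

# the same check from a throwaway container, the way LLocalSearch will see it
# (host.docker.internal needs the extra --add-host flag on Linux)
docker run --rm --add-host=host.docker.internal:host-gateway \
  curlimages/curl http://host.docker.internal:11434/api/tags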

Run the latest release

Recommended if you don't intend to develop on this project.

git clone https://github.com/nilsherzig/LLocalSearch.git
cd ./LLocalSearch
# 🔴 check the env vars inside the compose file (and the `env-example` file) and change them if needed
docker-compose up 

🎉 You should now be able to open the web interface at http://localhost:3000. Nothing else is exposed by default.
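If Ollama runs on another machine or a non-default port, point the backend at it through the compose environment. A hypothetical `.env` sketch — the variable name and format below are assumptions, so check `env-example` for what the compose file actually reads:

# .env — example values, not defaults (variable name is an assumption)
OLLAMA_HOST=http://host.docker.internal:11434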

Run the current git version

Newer features, but potentially less stable.

git clone https://github.com/nilsherzig/LLocalSearch.git
cd ./LLocalSearch
# 1. make sure to check the env vars inside `docker-compose.dev.yaml`
# 2. make sure you've really checked the dev compose file, not the normal one

# 3. build the containers and start the services
make dev
# both front- and backend will hot reload on code changes

If you don't have make installed, you can run the commands inside the Makefile manually.
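For reference, the dev target essentially builds and starts the dev compose stack. A rough manual equivalent, assuming it simply wraps Docker Compose (check the Makefile for the exact commands and flags):

# approximate manual equivalent of `make dev`
docker compose -f ./docker-compose.dev.yaml up --build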

Now you should be able to access the frontend at http://localhost:3000.

Stars

Kinda looks like I'm botting haha.

(Star history chart)


License: Apache License 2.0


Languages

Go 51.5% · Svelte 38.7% · Makefile 2.7% · TypeScript 2.6% · JavaScript 1.9% · Dockerfile 1.9% · HTML 0.6% · CSS 0.1%