destrex271 / llmqueue


LLMQueue

Just a simple experiment to see if I can get Postgres to work as a message queue and a vector DB, running alongside an open source LLM :)

What do you need to run this locally for now?

  • Ollama Docker Image
    docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama # (well, if you have a GPU ;))
     # OR
    docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

Things added so far

  • A layer over pgmq: simple functions to add messages to a queue and read them back
  • An interface over Llama (the LlamaInstance struct) that can generate responses to prompts
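The queue layer's shape can be sketched roughly as below. This is a hedged, standalone illustration, not the repo's actual code: a `VecDeque` stands in for pgmq's Postgres-backed queue so the example runs without a database, and the `Queue`, `add`, and `get` names are assumptions.

```rust
use std::collections::VecDeque;

// Hypothetical stand-in for the pgmq-backed queue layer. In the real
// project, `add` would map to a pgmq send (an INSERT into the queue
// table) and `get` to a pgmq read/pop; here a VecDeque keeps messages
// in memory so the sketch is self-contained.
struct Queue {
    messages: VecDeque<String>,
}

impl Queue {
    fn new() -> Self {
        Queue { messages: VecDeque::new() }
    }

    // Enqueue a message at the back of the queue.
    fn add(&mut self, msg: &str) {
        self.messages.push_back(msg.to_string());
    }

    // Dequeue the oldest message, or None if the queue is empty.
    fn get(&mut self) -> Option<String> {
        self.messages.pop_front()
    }
}

fn main() {
    let mut q = Queue::new();
    q.add("summarize this document");
    q.add("translate this sentence");
    // FIFO order: the first message added comes out first.
    assert_eq!(q.get().as_deref(), Some("summarize this document"));
    assert_eq!(q.get().as_deref(), Some("translate this sentence"));
    assert_eq!(q.get(), None);
    println!("queue drained");
}
```

Swapping the `VecDeque` for pgmq keeps the same interface while making the queue durable and shareable across processes, which is the point of using Postgres here.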
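For the LlamaInstance side, a minimal sketch of talking to the Ollama container started above might look like this. The struct fields and method names are assumptions (not the repo's actual API); only the endpoint path `/api/generate` and the `model`/`prompt`/`stream` body fields come from Ollama's documented REST API. The JSON is built by hand so the example needs no extra crates; a real implementation would use an HTTP client and a JSON library.

```rust
// Hypothetical sketch of a LlamaInstance that targets Ollama's
// /api/generate endpoint. Field and method names are illustrative.
struct LlamaInstance {
    host: String,
    model: String,
}

impl LlamaInstance {
    fn new(host: &str, model: &str) -> Self {
        LlamaInstance {
            host: host.to_string(),
            model: model.to_string(),
        }
    }

    // URL of Ollama's non-streaming generate endpoint.
    fn endpoint(&self) -> String {
        format!("{}/api/generate", self.host)
    }

    // Request body for a single prompt; quote escaping is naive,
    // kept only to make the sketch dependency-free.
    fn request_body(&self, prompt: &str) -> String {
        format!(
            r#"{{"model":"{}","prompt":"{}","stream":false}}"#,
            self.model,
            prompt.replace('"', "\\\"")
        )
    }
}

fn main() {
    let llama = LlamaInstance::new("http://localhost:11434", "llama3");
    println!("{}", llama.endpoint());
    println!("{}", llama.request_body("Hello!"));
}
```

POSTing that body to the endpoint (with any HTTP client) returns a JSON object whose `response` field holds the generated text, which is what a `generate` method on the struct would extract.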

TBD

  • Put the entire queueing system behind an async server
  • Develop a frontend to interact with multiple bots
