stephen37 / ollama_local_rag


Local RAG Application with Ollama, Langchain, and Milvus

This repository contains code for running local Retrieval Augmented Generation (RAG) applications. It uses Ollama to run the LLM (Llama3), Langchain for orchestration, and Milvus for vector storage.
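The wiring described above can be sketched as follows. This is a minimal, hypothetical illustration of how the pieces fit together, not the repository's actual code: the function names are invented, and the real scripts build a Langchain chain over a live Ollama and Milvus instance instead of the stand-in callables used here.

```python
# Minimal sketch of the RAG flow: retrieve context from the vector store,
# then ask the local LLM. The retriever and llm arguments stand in for a
# Milvus-backed Langchain retriever and an Ollama Llama3 client.

def build_prompt(context: str, question: str) -> str:
    """Combine retrieved context with the user question for the LLM."""
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

def answer(question: str, retriever, llm) -> str:
    # 1. Retrieve the most similar chunks for the question.
    docs = retriever(question)
    context = "\n---\n".join(docs)
    # 2. Ask the local model to answer grounded in that context.
    return llm(build_prompt(context, question))

if __name__ == "__main__":
    # Stand-ins so the sketch runs without Ollama or Milvus.
    fake_retriever = lambda q: ["Milvus stores the embeddings."]
    fake_llm = lambda prompt: "Milvus"
    print(answer("Where are the embeddings stored?", fake_retriever, fake_llm))
```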

Prerequisites

Before running this project, ensure you have the following installed:

  • Python 3.11 or later
  • Docker
  • Docker-Compose

Additionally, you will need:

  • An API key from Jina AI, which you can obtain from the Jina AI website.
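The README does not say how the key is passed to the scripts; a common convention is an environment variable, sketched below. The variable name JINA_API_KEY is an assumption here — check the scripts for the exact name they expect.

```python
import os

# Assumption: the scripts read the key from an environment variable
# named JINA_API_KEY. Replace the placeholder with your actual key,
# or export the variable in your shell before running the scripts.
os.environ["JINA_API_KEY"] = "your-jina-api-key"
print(os.environ["JINA_API_KEY"])
```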

Installation

  1. Clone this repository to your local machine:
     git clone git@github.com:stephen37/ollama_local_rag.git
     cd ollama_local_rag
  2. Install the dependencies:
     poetry install
  3. Start Milvus with Docker:
     docker-compose up -d

Usage

To run the different applications, execute the following command in your terminal:

python <file_name.py>

You will be prompted to enter queries, and the system will retrieve relevant answers based on the data processed.

For example, to interact with the data from the French parliament, run python rag_french_parliament.py.
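The interactive prompt loop described above might look like the hypothetical sketch below (the real scripts may structure it differently); answer_fn stands in for the RAG chain built from Ollama and Milvus.

```python
# Hypothetical query loop: read queries until an empty line, print each
# answer. input_fn/output_fn are injectable for testing without a TTY.
def repl(answer_fn, input_fn=input, output_fn=print):
    """Prompt for queries until an empty line, printing each answer."""
    while True:
        query = input_fn("Query (empty line to quit): ").strip()
        if not query:
            break
        output_fn(answer_fn(query))
```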


Feel free to check out Milvus, and share your experiences with the community by joining our Discord.
