LLM-local-RAG

A locally-hosted Retrieval Augmented Generation pipeline for querying a Large Language Model on YOUR documents.

Based on the Local-rag-Example framework, built with Ollama, LangChain, and Streamlit.

Install and Run Ollama

Download Ollama from https://ollama.com/download

Unzip Ollama.app and move it to your Applications folder (on macOS).

Open a terminal and execute:

ollama pull llama3.1
ollama serve
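
Optionally, you can confirm the server is up by querying Ollama's REST API on its default port (11434). A minimal check, assuming a default install:

import urllib.request

# Lists the locally available models; a JSON response means the server is running.
print(urllib.request.urlopen("http://localhost:11434/api/tags").read().decode())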

Clone the repo and set up dependencies

git clone https://github.com/Sydney-Informatics-Hub/LLM-local-RAG/
cd LLM-local-RAG
conda create -n localrag python=3.11 pip
conda activate localrag
pip install langchain streamlit streamlit_chat chromadb fastembed pypdf langchain_community cryptography
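
For orientation, here is a minimal sketch of the RAG flow these dependencies support: load a PDF, split it into chunks, embed the chunks with FastEmbed, index them in ChromaDB, then answer a question with the local Llama model. The file name, chunk sizes, and prompt are placeholder assumptions; the pipeline in app.py may differ in its details.

from langchain_community.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.embeddings.fastembed import FastEmbedEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_community.llms import Ollama

# Load a PDF and split it into overlapping chunks ("my_document.pdf" is a placeholder).
docs = PyPDFLoader("my_document.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1024, chunk_overlap=100).split_documents(docs)

# Embed the chunks with FastEmbed and index them in an in-memory Chroma store.
store = Chroma.from_documents(chunks, FastEmbedEmbeddings())

# Retrieve the most relevant chunks for a question and hand them to llama3.1.
question = "What is this document about?"
retrieved = store.as_retriever(search_kwargs={"k": 3}).invoke(question)
context = "\n\n".join(d.page_content for d in retrieved)
print(Ollama(model="llama3.1").invoke(f"Answer from this context only:\n{context}\n\nQuestion: {question}"))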

Run the Frontend

streamlit run app.py
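
This serves a chat UI in your browser (by default at http://localhost:8501). As a rough picture of how such a frontend hangs together, here is a minimal sketch using streamlit and streamlit_chat; answer_question is a hypothetical stand-in for the retrieval step, not the repo's actual code.

import streamlit as st
from streamlit_chat import message

def answer_question(question: str) -> str:
    # Stand-in for the real retrieval + generation pipeline.
    return f"(placeholder answer to: {question})"

st.title("LLM-local-RAG")
if "history" not in st.session_state:
    st.session_state.history = []

question = st.text_input("Ask a question about your documents")
if question:
    st.session_state.history.append((question, answer_question(question)))

# Render the conversation so far, newest at the bottom.
for i, (q, a) in enumerate(st.session_state.history):
    message(q, is_user=True, key=f"user_{i}")
    message(a, key=f"bot_{i}")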

About


License: MIT License


Languages

Language: Python 100.0%