This repository contains a collection of basic Python examples utilizing LlamaIndex to showcase various chat interfaces and Retrieval-Augmented Generation (RAG) strategies. Each example is designed to be self-contained and demonstrates a unique aspect of working with RAG and chatbot interfaces.
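The core idea behind all of these examples is the same RAG loop: retrieve the stored text most relevant to a user's question, then splice it into the prompt sent to the LLM. A minimal, dependency-free sketch of that pattern (the corpus and keyword-overlap scoring below are illustrative stand-ins, not LlamaIndex's actual retrieval logic):

```python
# Toy sketch of the RAG pattern these examples build on: score stored
# documents against a query by keyword overlap, retrieve the best match,
# and augment the prompt with it. Real indexes use embeddings, not
# word overlap -- this only shows the shape of the loop.
from typing import List


def retrieve(query: str, docs: List[str]) -> str:
    """Return the document sharing the most words with the query."""
    q_words = set(query.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))


def build_prompt(query: str, docs: List[str]) -> str:
    """Augment the query with retrieved context before calling an LLM."""
    context = retrieve(query, docs)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"


docs = [
    "LlamaIndex connects LLMs to external data sources.",
    "Streamlit builds simple web interfaces in pure Python.",
]
print(build_prompt("What does LlamaIndex do?", docs))
```

In the examples themselves, LlamaIndex handles the retrieval and prompt assembly; this sketch is only the conceptual skeleton.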
- Python 3.8+
- LlamaIndex library
- Additional requirements listed in `requirements.txt`
To set up your environment, clone the repository and install the dependencies:

```shell
git clone https://github.com/leemark/llamaindex_ex.git
cd llamaindex_ex
pip install -r requirements.txt
```
To run an example, navigate to its directory and execute the Python script:

```shell
cd path/to/examples
python hello.py
```
- `hello.py`: Demonstrates a basic "Hello, World!" example with an index.
- `hello_persist.py`: Shows how to create a persistent RAG index.
- `local_models_ollama.py`: An example of using local models with LlamaIndex and Ollama.
- `streamlit_interface.py`: A simple Streamlit app interface for interacting with the model.
- `rag_web_page.py`: An example of augmenting a single web page with RAG.
- `rag_web_search_brave.py`: An example of augmenting Brave Search API results with RAG.
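The persistence idea behind `hello_persist.py` is worth calling out: build the index once, save it to disk, and reload it on later runs instead of re-indexing everything. LlamaIndex does this through its storage context; the dict-based "index" and file path below are only a toy stand-in showing the save-or-reload pattern:

```python
# Toy illustration of the persist-and-reload pattern: an inverted index
# mapping each word to the ids of the documents containing it, saved as
# JSON. (LlamaIndex persists real indexes for you; this file path and
# dict structure are hypothetical.)
import json
import os

INDEX_PATH = "toy_index.json"  # hypothetical persist location


def build_index(docs):
    """Map each lowercase word to the ids of the documents containing it."""
    index = {}
    for doc_id, text in enumerate(docs):
        for word in set(text.lower().split()):
            index.setdefault(word, []).append(doc_id)
    return index


def load_or_build(docs):
    """Reload the saved index if present; otherwise build and persist it."""
    if os.path.exists(INDEX_PATH):
        with open(INDEX_PATH) as f:
            return json.load(f)
    index = build_index(docs)
    with open(INDEX_PATH, "w") as f:
        json.dump(index, f)
    return index


docs = ["RAG augments prompts with retrieved text", "Ollama runs local models"]
index = load_or_build(docs)  # first call builds and saves the index
index = load_or_build(docs)  # later calls load it from disk instead
print(index["rag"])  # → [0]
```

The payoff is the same as in the real example: indexing (embedding documents) is the expensive step, so persisting the result makes repeat runs fast.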
We welcome contributions! If you'd like to improve an example or add a new one, please open a pull request with your proposed changes.
This project is licensed under the MIT License - see the LICENSE file for details.
If you have any questions or want to discuss the examples further, feel free to reach out.