Random attempts at learning LlamaIndex
- rag_1.py - Run the starter example against my own Markdown documents
- rag_2.py - Customize the LLM and prompts
- rag_3.py - Use a vector database (Qdrant) plus data ingestion
- rag_4.py - Build the query engine in a slightly lower-level way
- rag_5.py - Instrumentation with a customized event handler
- rag_6.py - Use an embedding model from Hugging Face
- rag_7.py - Use different chunking/indexing strategies
- rag_8.py - Use IngestionPipeline to parse documents and feed nodes into the vector DB
- rag_bot_1 - The simplest RAG chat app, using Streamlit as the UI