rsrohan99 / rag-stream-intermediate-events-tutorial

Tutorial on how to properly stream intermediate LlamaIndex events to the Vercel AI SDK via Server-Sent Events during RAG.

In this tutorial, we'll see how to use the LlamaIndex Instrumentation module to send the intermediate steps of a RAG pipeline to the frontend, for a more intuitive user experience.
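The Instrumentation module follows a dispatcher/event-handler pattern: the RAG pipeline fires events as it runs (retrieval started, nodes retrieved, and so on) and a custom handler picks them up and forwards them to the client. A minimal pure-Python sketch of that pattern, with no LlamaIndex dependency — the class and event names here are illustrative, not the library's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    name: str
    payload: dict = field(default_factory=dict)

class Dispatcher:
    """Fires each emitted event to every registered handler,
    mimicking the shape of an instrumentation dispatcher."""
    def __init__(self):
        self.handlers = []

    def add_event_handler(self, handler):
        self.handlers.append(handler)

    def emit(self, event: Event):
        for handler in self.handlers:
            handler(event)

# A handler that collects intermediate steps so they can be
# streamed to the frontend later.
received = []
dispatcher = Dispatcher()
dispatcher.add_event_handler(received.append)

# Simulated intermediate RAG events
dispatcher.emit(Event("retrieve_start", {"query": "What is RAG?"}))
dispatcher.emit(Event("retrieve_end", {"nodes": 3}))
```

In the real backend, the registered handler pushes each event onto the response stream instead of a list.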

Full video tutorial under 3 minutes 🔥👇

Stream Intermediate events in RAG

We use Server-Sent Events (SSE), which are received by the Vercel AI SDK on the frontend.
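The SSE wire format itself is simple: each message is a block of `field: value` lines terminated by a blank line. A small helper sketching how the backend might encode one intermediate event (the `event`/`data` field layout is standard SSE; the event name and payload are just examples):

```python
import json

def sse_format(event_type: str, data: dict) -> str:
    """Encode a single server-sent event: field lines followed
    by the blank line that terminates the message."""
    return f"event: {event_type}\ndata: {json.dumps(data)}\n\n"

msg = sse_format("retrieve", {"status": "3 nodes retrieved"})
```

On the frontend, the Vercel AI SDK consumes this stream and exposes the events to the UI as they arrive.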

Getting Started

First clone the repo:

git clone https://github.com/rsrohan99/rag-stream-intermediate-events-tutorial.git

cd rag-stream-intermediate-events-tutorial

Start the Backend

cd into the backend directory

cd backend

First create .env from .env.example

cp .env.example .env

Set the OpenAI key in .env

OPENAI_API_KEY=****
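The backend reads this key from the environment at startup. A hedged sketch of a fail-fast check you might add so a missing key produces a clear error rather than a confusing OpenAI client failure — `require_api_key` is a hypothetical helper, not part of the repo:

```python
import os

def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Return the named environment variable, or fail with a
    message pointing at the .env file it should live in."""
    key = os.getenv(name)
    if not key:
        raise RuntimeError(f"{name} is not set; add it to backend/.env")
    return key
```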

Install the dependencies

poetry install

Generate the Index for the first time

poetry run python app/engine/generate.py

Start the backend server

poetry run python main.py

Start the Frontend

cd into the frontend directory

cd frontend

First create .env from .env.example

cp .env.example .env

Install the dependencies

npm i

Start the frontend server

npm run dev
