This project implements a chat-bot that combines document retrieval with conversational AI. Users ask questions about uploaded documents, and the chat-bot responds with relevant information drawn from them, using vector retrieval together with a conversational AI model.
To run the chat-bot, follow these steps:

- Clone the repository:

  ```shell
  git clone <repository_url>
  ```

- Navigate to the project directory:

  ```shell
  cd <project_directory>
  ```

- Install dependencies using `pip`:

  ```shell
  pip install -r requirements.txt
  ```

- Download the model weights from the following link and place them in the project directory:

- Run the Streamlit app:

  ```shell
  python -m streamlit run app.py
  ```

  This will launch the chat-bot application in your web browser.
- Upload PDF documents:
  - Use the file uploader in the sidebar to upload the PDF documents you want the chat-bot to process.
  - Click the "Process" button to extract the text and create a vector database for document retrieval.
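The "Process" step can be illustrated with a simplified sketch. The real app presumably uses a PDF parser and a neural embedding model; the names below (`chunk_text`, the bag-of-words "embedding") are illustrative stand-ins, not the project's actual code:

```python
import math
import re
from collections import Counter

def chunk_text(text: str, chunk_size: int = 50, overlap: int = 10) -> list[str]:
    # Split extracted text into overlapping word windows so answers
    # spanning a chunk boundary are not lost.
    words = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, max(len(words) - overlap, 1), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
    return chunks

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    # A real app would use a neural sentence-embedding model instead.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Build an in-memory "vector store": one vector per chunk.
document = "Streamlit apps run in the browser. " * 5 + "The model weights live in the project directory."
store = [(chunk, embed(chunk)) for chunk in chunk_text(document, chunk_size=12, overlap=4)]

# A question is embedded the same way, then matched against the store.
query = embed("Where do the model weights live?")
best_chunk, _ = max(store, key=lambda item: cosine(query, item[1]))
```

The overlap between adjacent chunks is a common design choice: it trades a little storage for robustness when a relevant sentence straddles two chunks.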
- Ask questions:
  - Type your question in the chat input field.
  - Click "Send" to submit your question to the chat-bot.
- Chat-bot response:
  - The chat-bot responds with relevant information from the uploaded documents.
  - It combines document retrieval with a conversational AI model to provide accurate answers.
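At a high level, each response is produced by retrieving the most relevant chunks and handing them to the language model inside a prompt. A minimal sketch, where `retrieve` and `build_prompt` are hypothetical stand-ins for the app's actual vector search and prompt template:

```python
def retrieve(question: str, store: list[str], k: int = 2) -> list[str]:
    # Hypothetical stand-in for vector search: rank chunks by how many
    # question words they share. The real app queries its vector store.
    q_words = set(question.lower().split())
    scored = sorted(store, key=lambda c: len(q_words & set(c.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(question: str, context_chunks: list[str]) -> str:
    # Ground the model's answer in the retrieved document text.
    context = "\n".join(f"- {c}" for c in context_chunks)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )

store = [
    "the process button builds a vector database",
    "uploaded pdf files are stored in the data directory",
    "streamlit renders the chat interface",
]
question = "where are uploaded files stored?"
prompt = build_prompt(question, retrieve(question, store))
# `prompt` would then be sent to the conversational model for generation.
```

Keeping the retrieved context inside the prompt is what lets the model answer from the uploaded documents rather than from its general training data.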
All the required dependencies are listed in the `requirements.txt` file. You can install them using the following command:

```shell
pip install -r requirements.txt
```
- `app.py`: Main application code that implements the Streamlit UI, document processing, chat-bot interaction, and conversational AI integration.
- `data/`: Directory where uploaded PDF documents are stored.
- `vectorstore/`: Directory where vector databases are stored.
- `requirements.txt`: List of required Python packages with their versions.
- The `llama-2-7b-chat.ggmlv3.q8_0.bin` model weights should be downloaded from the provided link and placed in the project directory before running the app.
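Because the app depends on that local weights file (typically loaded by a GGML-compatible runtime such as `ctransformers`), a small startup check can fail fast with a clear message instead of a loader traceback. This helper is illustrative, not part of the project's code; only the file name is taken from above:

```python
from pathlib import Path

MODEL_FILE = "llama-2-7b-chat.ggmlv3.q8_0.bin"

def model_weights_present(project_dir: str = ".") -> bool:
    # Verify the quantized GGML weights were downloaded and placed
    # alongside app.py before the app tries to load them.
    return (Path(project_dir) / MODEL_FILE).is_file()

if not model_weights_present():
    print(f"Missing {MODEL_FILE}: download it and place it in the project directory.")
```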