lalashiwoya / ChatBot_Design_With_Langchain_Router_Chain


Chatbot Project

Environment Setup

  1. Create and activate a Conda environment:
    conda create -y -n chatbot python=3.11
    conda activate chatbot
  2. Install required packages:
    pip install -r requirements.txt
  3. Set up environment variables:
    • Add your OPENAI_API_KEY to the .env file; a template is provided in .env.example.
    source .env
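
As an alternative to sourcing the file in the shell, the variables can also be loaded from within Python using python-dotenv (a minimal sketch; whether python-dotenv is listed in requirements.txt is an assumption):

    # load_env_example.py (illustrative only)
    import os
    from dotenv import load_dotenv  # python-dotenv

    load_dotenv()  # reads key=value pairs from .env into the process environment

    # Fail early with a clear message if the key is missing.
    if not os.getenv("OPENAI_API_KEY"):
        raise RuntimeError("OPENAI_API_KEY is not set; copy .env.example to .env and fill it in.")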

Running the Application

To start the application, use the following command:

chainlit run app.py
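
For orientation, a minimal Chainlit app has roughly the shape below. This is a generic sketch, not the repository's actual app.py, which wires in the router chain, retrievers, and settings panel:

    import chainlit as cl

    @cl.on_chat_start
    async def start():
        # Runs once per session; the real app builds its chains and settings here.
        await cl.Message(content="Hi! Ask me anything.").send()

    @cl.on_message
    async def main(message: cl.Message):
        # The real app routes message.content through the selected LLM and retriever.
        await cl.Message(content=f"You said: {message.content}").send()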

Router Chain Implementation

Setting Panel

Features

User Setting Panel

Users can select the specific LLM (large language model) they prefer for generating responses, and they can switch between different LLMs within a single conversation session.
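
Chainlit's ChatSettings API is one way to expose such a panel; the widget id and model names below are placeholders, and the repository's actual configuration may differ:

    import chainlit as cl
    from chainlit.input_widget import Select

    @cl.on_chat_start
    async def setup_settings():
        # Render a dropdown in the user setting panel; the model list is illustrative.
        await cl.ChatSettings(
            [
                Select(
                    id="llm",
                    label="LLM for responses",
                    values=["gpt-3.5-turbo", "gpt-4o-mini"],  # placeholder model names
                    initial_index=0,
                )
            ]
        ).send()

    @cl.on_settings_update
    async def on_settings_update(settings: dict):
        # Called whenever the user changes a widget; switch the active LLM here.
        cl.user_session.set("llm", settings["llm"])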

Setting Panel

QA with RAG

  • Various Information Sources: The chatbot can retrieve information from web pages, YouTube videos, and PDFs.
  • Source Display: You can view the source of the information at the end of each answer.
  • LLM Model Identification: The specific LLM model utilized for generating the current response is indicated.
  • Router Retriever: Easy to adapt to different domains, as each domain can be equipped with its own retriever (see the sketch after this list).
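
The repository's exact chain is not reproduced here, but the routing idea can be sketched with LangChain: a small classification step picks a domain, and the matching retriever supplies the context. The domains, sources, and prompts below are assumptions for illustration:

    from langchain_community.document_loaders import WebBaseLoader, PyPDFLoader
    from langchain_community.vectorstores import FAISS
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI, OpenAIEmbeddings

    embeddings = OpenAIEmbeddings()

    # One retriever per domain; the web page and PDF paths are placeholders.
    web_docs = WebBaseLoader("https://example.com/faq").load()
    pdf_docs = PyPDFLoader("docs/manual.pdf").load()
    retrievers = {
        "faq": FAISS.from_documents(web_docs, embeddings).as_retriever(),
        "manual": FAISS.from_documents(pdf_docs, embeddings).as_retriever(),
    }

    llm = ChatOpenAI(model="gpt-3.5-turbo")

    # Step 1: classify the question into a domain.
    router_prompt = ChatPromptTemplate.from_template(
        "Classify the question into one of: faq, manual. Reply with the name only.\n\n{question}"
    )
    route = router_prompt | llm | StrOutputParser()

    # Step 2: answer with documents from the chosen retriever.
    answer_prompt = ChatPromptTemplate.from_template(
        "Answer using only this context:\n{context}\n\nQuestion: {question}"
    )

    def ask(question: str) -> str:
        domain = route.invoke({"question": question}).strip().lower()
        docs = retrievers.get(domain, retrievers["faq"]).invoke(question)
        context = "\n\n".join(doc.page_content for doc in docs)
        return (answer_prompt | llm | StrOutputParser()).invoke(
            {"context": context, "question": question}
        )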

Conversation Memory

  • Memory Management: The chatbot is equipped with conversation memory. Once the memory exceeds 500 tokens, it is automatically summarized (see the sketch below).
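
LangChain's ConversationSummaryBufferMemory provides this kind of behaviour: recent turns are kept verbatim and older ones are summarized once a token limit is exceeded. Whether the repository uses this class or a custom equivalent is an assumption:

    from langchain.memory import ConversationSummaryBufferMemory
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-3.5-turbo")

    # Older turns are compressed into a running summary once the buffer passes 500 tokens.
    memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=500)

    memory.save_context({"input": "Hi, I'm planning a trip to Kyoto."},
                        {"output": "Great! What would you like to know?"})
    print(memory.load_memory_variables({}))  # {'history': '...'}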

LangSmith Evaluation

To evaluate model generations against human references, or to log outputs for specific test queries, use LangSmith.

  1. Register an account at LangSmith.
  2. Add your LANGCHAIN_API_KEY to the .env file.
  3. Execute the script with your dataset name:
    python langsmith_tract.py --dataset_name <YOUR DATASET NAME>
  4. Modify the data path in langsmith_evaluation/config.toml if necessary (e.g., path to a CSV file with question and answer pairs).
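
For orientation, uploading a CSV of question/answer pairs as a LangSmith dataset can look roughly like the sketch below; the file path, column names, and dataset name are placeholders, and langsmith_tract.py may do this differently:

    import csv
    from langsmith import Client

    client = Client()  # picks up LANGCHAIN_API_KEY from the environment

    dataset = client.create_dataset("chatbot-qa-eval")  # placeholder dataset name

    # Assumed CSV layout: a 'question' column and an 'answer' column.
    with open("langsmith_evaluation/qa_pairs.csv", newline="") as f:
        for row in csv.DictReader(f):
            client.create_example(
                inputs={"question": row["question"]},
                outputs={"answer": row["answer"]},
                dataset_id=dataset.id,
            )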

Langsmith

Recording Human Feedback with Literal AI

Use Literal AI to record human feedback for each generated answer. Follow these steps:

  1. Register an account at Literal AI.
  2. Add your LITERAL_API_KEY to the .env file.
  3. Once the LITERAL_API_KEY is added to your environment, run the command chainlit run app.py. You will see three new icons as shown in the image below, where you can leave feedback on the generated answers.

Literal_AI_Web

  4. Track this human feedback in your Literal AI account. You can also view the prompts or intermediate steps used to generate these answers.

Literal_AI_Web

User Authentication and Past Chat History Setup

This section details the steps for setting up user authentication in your application. Each authenticated user can view their own past interactions with the chatbot.

  1. Add your APP_LOGIN_USERNAME and APP_LOGIN_PASSWORD to the .env file.
  2. Run the following command to create a secret, which is essential for securing user sessions:
    chainlit create-secret
    Copy the output CHAINLIT_AUTH_SECRET and add it to your .env file.
  3. Once you launch the application, you will see a login page.
  4. Log in with your APP_LOGIN_USERNAME and APP_LOGIN_PASSWORD.
  5. Upon successful login, each user will be directed to a page displaying their personal chat history with the chatbot.
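
Behind these steps, Chainlit password authentication is implemented with a password_auth_callback; a minimal sketch that checks the two environment variables above could look like this (the repository's actual callback may differ):

    import os
    import chainlit as cl

    @cl.password_auth_callback
    def auth_callback(username: str, password: str):
        # Compare the submitted credentials against the values from .env.
        if (username == os.environ.get("APP_LOGIN_USERNAME")
                and password == os.environ.get("APP_LOGIN_PASSWORD")):
            return cl.User(identifier=username, metadata={"role": "user"})
        return None  # reject the login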

Web Page

Presentation

Below is a preview of the web interface for the chatbot:

Web Page

Configuration

To customize the chatbot according to your needs, define your configurations in the config.toml file.
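
The keys inside config.toml depend on the project, but since the environment uses Python 3.11 the file can be read with the standard-library tomllib; the key name below is purely illustrative:

    import tomllib  # standard library in Python 3.11+

    with open("config.toml", "rb") as f:  # tomllib requires binary mode
        config = tomllib.load(f)

    # Example access; the real key names depend on the repository's config schema.
    print(config.get("llm", {}))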
