ccwu0918 / aifaq

AI FAQ Proof-of-Concept project: it provides a chatbot that replies to questions about the Hyperledger ecosystem

Hyperledger QA PoC version 2

The scope of this Hyperledger Labs project is to support users (end users, developers, etc.) in their work, so they can avoid wading through oceans of documents to find the information they are looking for. We are implementing an open source conversational AI tool which replies to questions about a specific context. This is proof-of-concept software which allows you to create a chatbot using Google Colab (or a local notebook, which requires a GPU). Here is the official Wiki page: Hyperledger Labs aifaq. Please also read the Antitrust Policy and the Code of Conduct.

Background

The system is an open source Jupyter Notebook (derived from here: medium.com) which implements an AI chatbot. The idea is to provide an open source framework/template, as an example for other communities. Recent results with open LLMs make it possible to get good performance with common hardware resources.
Below is the application architecture:

LLM chatbot schema

We use RAG (Retrieval-Augmented Generation, arxiv.org) for the question-answering use case. This technique aims to improve LLM answers by incorporating knowledge from an external database (e.g. a vector database).
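The retrieval-then-augment idea can be sketched in a few lines. The toy bag-of-words "embedding" and the three-document corpus below are illustrative stand-ins for the real neural encoder and vector database the notebook uses:

```python
from collections import Counter
from math import sqrt

# Toy corpus standing in for the ingested documentation chunks.
DOCS = [
    "Hyperledger Iroha is a blockchain platform for identity management.",
    "Gradio builds simple web user interfaces for machine learning demos.",
    "RAG retrieves relevant context before the language model answers.",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system uses a neural encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context to the user question: the 'augmented' prompt."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

The augmented prompt is then sent to the LLM, which grounds its answer in the retrieved context instead of relying only on its training data.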

The image depicts two workflows:

  1. The data ingestion workflow
  2. The chat workflow

During the ingestion phase, the system loads context documents and creates a vector database. In our case, the document sources are:

  • An online software guide (readthedocs template)
  • The GitHub issues and pull requests

After the first phase, the system is ready to reply to user questions.
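The ingestion phase described above can be illustrated with a minimal chunk-and-index step. The chunk size, overlap, and `ingest` helper below are assumptions for illustration, not the notebook's actual splitter:

```python
def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split a document into overlapping character windows, as a vector
    database ingestion step typically does before embedding each piece."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def ingest(sources: dict[str, str]) -> list[tuple[str, str]]:
    """Return (chunk, source_name) pairs ready to be embedded and stored
    in the vector database; sources map a name to its raw text."""
    index = []
    for name, text in sources.items():
        for piece in chunk(text):
            index.append((piece, name))
    return index
```

In the real notebook the sources are the readthedocs pages and the GitHub issues/pull requests, and each chunk is embedded and written to the vector store.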

Currently, we use the open source HuggingFace Zephyr-7b-alpha model, but in the future we want to investigate other open source models. The User Interface is built with Gradio.
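Zephyr models expect their prompts in a specific chat format. As a sketch, the helper below reproduces the `<|system|>`/`<|user|>`/`<|assistant|>` template published for the HuggingFaceH4 Zephyr models; in practice the same formatting can be done with `tokenizer.apply_chat_template` from the transformers library:

```python
def zephyr_prompt(system: str, user: str) -> str:
    """Format a single-turn conversation in the Zephyr chat template.
    Each turn is delimited by a role marker and an end-of-sequence tag;
    the trailing <|assistant|> marker cues the model to generate a reply."""
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )
```

The retrieved context would go into the system or user turn before generation.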

Open Source Version

The software is under the Apache 2.0 License (please check the LICENSE and NOTICE files included). For the dependencies, it is compliant with the ASF 3rd Party License Policy: the LICENSE file contains "pointers" to the dependencies' licenses and a list of Apache 2.0-licensed dependencies (Assembling LICENSE and NOTICE files).

Installation

Below are the main steps to set up the system:

  1. Download the hyperledger_aifaq_poc_v3.ipynb notebook file from the src folder
  2. Create a new Google Colab notebook
  3. Load the downloaded notebook file
  4. Set up the runtime GPU
  5. Set the URL and GitHub repo document sources
  6. Create a new GitHub personal token
  7. Add the token as a new secret to the Google Colab notebook

The first step is straightforward: click the src folder to open it, then click the hyperledger_aifaq_poc_v3.ipynb file, and then click the button below:

download button

Now, in Google Drive, click the New button -> Other, then Google Colaboratory:

new Google Colab notebook

Inside the new notebook, open the File menu, select Load notebook, then click the "Browse" button and select the downloaded file (hyperledger_aifaq_poc_v3.ipynb).

We need a GPU to execute the notebook, so we set it up from the Runtime menu, then change the runtime type:

set up the runtime

If you have a free account, you can only use the T4 GPU.

The notebook takes the documents for RAG from two sources:

  1. An online website
  2. A GitHub repository
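For the GitHub source, issues and pull requests are fetched through the GitHub REST API with an authenticated request. The sketch below only builds such a request with the standard library; the real notebook may use a client library instead, and the repo/token values are placeholders:

```python
from urllib.request import Request

def issues_request(repo: str, token: str) -> Request:
    """Build an authenticated GitHub REST API request for a repository's
    issues (the issues endpoint also lists pull requests). Illustrative
    only; no network call is made here."""
    return Request(
        f"https://api.github.com/repos/{repo}/issues?state=all",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
```

The fine-grained personal access token created in a later step is what goes into the Authorization header.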

The image below shows how to set them up:

document sources

In our case, we get the Hyperledger Iroha readthedocs guide and its GitHub repository (fetching issues and pull requests). In the url string we specify the website, while in the repo string we set the GitHub repository.

From your personal GitHub account, inside the profile settings, select the developer settings:

developer settings

Then select the fine-grained token

fine-grained token

and click on the generate button, then copy the token. In the Google Colab notebook, select the secret key and add a new secret, as in the image below:

github personal token

  • The secret must have notebook access enabled
  • The name should be GITHUB_PERSONAL_ACCESS_TOKEN
  • Paste the token inside the Value field
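Inside the notebook, Colab secrets are read with the `google.colab.userdata` API. The environment-variable fallback below is an assumption added so the same code can run outside Colab; it is not part of the original notebook:

```python
import os

def github_token() -> str:
    """Read GITHUB_PERSONAL_ACCESS_TOKEN from Colab's secret store when
    running in Colab; otherwise fall back to an environment variable
    (the fallback is an assumption for local runs, not Colab behavior)."""
    try:
        from google.colab import userdata  # only importable inside Colab
        return userdata.get("GITHUB_PERSONAL_ACCESS_TOKEN")
    except ImportError:
        return os.environ["GITHUB_PERSONAL_ACCESS_TOKEN"]
```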

Usage

Now, we can test the PoC by executing the notebook: in the Google Colab notebook, open the Runtime menu and select Execute all:

  • It will take 5-15 minutes (depending on the GPU and the documents)
  • When the execution finishes, it loads a UI which lets you ask questions; it replies in around 30 seconds

Below is an example:

UI Gradio example

For any questions, please contact us on the Discord channel:

  • Server: Hyperledger
  • Channel: #aifaq

Current version notes

This is a proof-of-concept; a list of future improvements is below:

  1. We want to implement a prototype starting from this PoC: a container architecture installed on a GPU Cloud Server
  2. At the same time, we'd like to move to the next step: the Hyperledger Incubation Stage
  3. We will investigate other open source models
  4. Evaluation of the system using standard metrics
  5. We would like to improve the system; some ideas are: fine-tuning, Corrective RAG, Decomposed LoRA
  6. Add "guardrails", which are specific ways of controlling the output of an LLM, such as avoiding specific topics, responding in a particular way to specific user requests, etc.

About

AI FAQ Proof-of-Concept project: it provides a chatbot that replies to questions about the Hyperledger ecosystem

License:Other


Languages

Language:Jupyter Notebook 100.0%