The scope of this Hyperledger Labs project is to help users (end users, developers, etc.) in their work, so they do not have to wade through oceans of documents to find the information they are looking for. We are implementing an open source conversational AI tool which answers questions related to a specific context. This is proof-of-concept software which allows you to create a chatbot using Google Colab (or a local notebook, which requires a GPU). Here is the official Wiki page: Hyperledger Labs aifaq. Please also read the Antitrust Policy and the Code of Conduct. The meeting invitation is on the Hyperledger Labs calendar: [Hyperledger Labs] FAQ AI Lab calls.
The system is an open source Jupyter Notebook (derived from here: medium.com) which implements an AI chatbot. The idea is to implement an open source framework/template that can serve as an example for other communities. Recent results in open LLMs make it possible to achieve good performance on common hardware resources.
Below is the application architecture:
We use RAG (Retrieval-Augmented Generation, arxiv.org) for the question-answering use case. That technique aims to improve LLM answers by incorporating knowledge from an external database (e.g. a vector database).
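To illustrate the RAG idea, here is a toy sketch (not the notebook's actual code): it retrieves the context chunk most similar to the question using a naive bag-of-words similarity, then prepends it to the prompt so the LLM can ground its answer. A real system would use dense embeddings and a vector database instead.

```python
# Toy RAG illustration: retrieve the most relevant chunk, then
# build a context-grounded prompt for the LLM.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str]) -> str:
    """Return the chunk most similar to the question."""
    q = Counter(question.lower().split())
    return max(chunks, key=lambda c: cosine(q, Counter(c.lower().split())))

chunks = [
    "Iroha is a blockchain platform for simple asset management.",
    "Gradio provides a web UI for machine learning demos.",
]
question = "What is Iroha?"
context = retrieve(question, chunks)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

In the real system, the retrieval step queries the vector database built during the ingestion phase.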
The image depicts two workflows:
- The data ingestion workflow
- The chat workflow
During the ingestion phase, the system loads context documents and creates a vector database. In our case, the document sources are:
- An online software guide (readthedocs template)
- The GitHub issues and pull requests
After the first phase, the system is ready to reply to user questions.
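A key step in the ingestion phase is splitting the source documents into overlapping chunks before embedding them into the vector database. The sketch below uses hypothetical parameters (the notebook's real splitter and chunk sizes may differ):

```python
# Minimal sketch of the chunking step in the ingestion workflow.
# Overlap between consecutive chunks helps preserve context that
# would otherwise be cut at a chunk boundary.
def split_into_chunks(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split a document into overlapping character chunks, each of
    which would then be embedded and stored in the vector database."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, len(text), step)
            if text[i:i + chunk_size]]

doc = "".join(chr(65 + i % 26) for i in range(500))  # dummy 500-char document
chunks = split_into_chunks(doc)
```

Each chunk shares its last 50 characters with the start of the next one, so a sentence straddling a boundary is still fully contained in at least one chunk.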
Currently, we use the open source HuggingFace Zephyr-7b-alpha model, but in the future we want to investigate other open source models. Moreover, the user interface uses Gradio.
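Zephyr models expect a simple chat-style prompt format. The helper below builds that format by hand so it can be shown without downloading the model weights; the commented lines sketch how generation could then be wired up via the transformers library (the notebook may load the model differently, e.g. quantized):

```python
# Build the chat-style prompt format used by Zephyr models.
def build_zephyr_prompt(system: str, user: str) -> str:
    """Return a prompt in the <|system|>/<|user|>/<|assistant|> format."""
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )

# Generation would then look roughly like (requires a GPU):
#   from transformers import pipeline
#   pipe = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-alpha")
#   out = pipe(build_zephyr_prompt("Answer from the given context.", "What is Iroha?"))
```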
The software is under the Apache 2.0 License (please check the LICENSE and NOTICE files included). Its dependencies are ASF 3rd Party License Policy compliant: the LICENSE file contains "pointers" to the dependencies' licenses and a list of Apache 2.0-licensed dependencies (Assembling LICENSE and NOTICE files).
Below are the main steps to set up the system:
- Download the hyperledger_aifaq_poc_v3.ipynb notebook file from the src folder
- Create a new Google Colab notebook
- Load the downloaded notebook file
- Set up the runtime GPU
- Set the URL and GitHub repo document sources
- Create a new GitHub personal token
- Add the token, as new secret, to the Google Colab notebook
The first step is straightforward: just click the src folder to open it, then click the hyperledger_aifaq_poc_v3.ipynb file and then click the button below:
Now, in Google Drive, click the New button -> Other -> Google Colaboratory.
Inside the new notebook, select the File menu, then Load notebook, click the "Browse" button, and select the downloaded file (hyperledger_aifaq_poc_v3.ipynb).
We need a GPU to execute the notebook, so we set it up from the Runtime menu, then change the runtime type:
If you have a free account, you can only use the T4 GPU.
The notebook takes the documents for RAG from two sources:
- An online website
- A GitHub repository
The image below shows how to set them up:
In our case, we use the Hyperledger Iroha readthedocs guide and its GitHub repository (getting issues and pull requests). In the url string we specify the website, while in the repo string we set the GitHub repository.
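As a sketch of how the repo setting maps onto data sources, the hypothetical helper below (not the notebook's actual code) derives the GitHub REST API endpoints for issues and pull requests from an "owner/name" repo string:

```python
# Hypothetical helper: map a repo string such as "hyperledger/iroha"
# onto the GitHub REST API endpoints for issues and pull requests.
def github_endpoints(repo: str) -> dict[str, str]:
    base = f"https://api.github.com/repos/{repo}"
    return {
        "issues": f"{base}/issues",
        "pulls": f"{base}/pulls",
    }

endpoints = github_endpoints("hyperledger/iroha")
```

Requests to these endpoints are authenticated with the personal access token configured in the next step.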
From your personal GitHub account, inside the profile settings, select the developer settings:
Then select the fine-grained token
and click on the generate button: now copy the token. In the Google Colab notebook, select the secret key and add a new secret, as in the image below:
- The token must have notebook access enabled
- The name should be GITHUB_PERSONAL_ACCESS_TOKEN
- Paste it inside the Value field
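Inside the notebook, the secret can then be read via Colab's `google.colab.userdata` API. The sketch below (an assumption about how the notebook reads it, with an environment-variable fallback for local runs) shows the idea:

```python
import os

def get_github_token() -> str:
    """Read the GitHub token from Colab secrets when running in Colab,
    falling back to an environment variable for local notebooks."""
    try:
        # Only available inside a Google Colab runtime.
        from google.colab import userdata
        return userdata.get("GITHUB_PERSONAL_ACCESS_TOKEN")
    except ImportError:
        return os.environ["GITHUB_PERSONAL_ACCESS_TOKEN"]
```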
Now, we can test the PoC by executing the notebook: in the Google Colab notebook, open the Runtime menu and select Execute all:
- It will take 5-15 minutes (depending on the GPU and the documents)
- When the execution finishes, it loads a UI which allows you to ask questions; it replies in around 30 seconds
Below is an example:
For any questions, please contact us on the Discord channel:
- Server: Hyperledger
- Channel: #aifaq
This is a proof-of-concept; a list of future improvements is below:
- We want to implement a prototype starting from this PoC: a container architecture installed on a GPU cloud server
- At the same time, we'd like to move to the next step: the Hyperledger Incubation Stage
- We will investigate other open source models
- Evaluate the system using standard metrics
- We would like to improve the system; some ideas are: fine-tuning, Corrective RAG, Decomposed LoRA
- Add "guardrails", which are specific ways of controlling the output of an LLM, such as avoiding specific topics, responding in a particular way to specific user requests, etc.