Team3256 / WPILib-Copilot

Your WPILib Pair Programmer


WPILib-Copilot

This repo implements a locally hosted chatbot focused on question answering over the WPILib documentation, built with LangChain and FastAPI.

The app leverages LangChain's streaming support and async API to update the page in real time for multiple users.
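The real app streams tokens to browsers through LangChain's streaming callbacks and FastAPI; as a dependency-free sketch of the underlying idea, here is how a single asyncio event loop can stream token-by-token answers to multiple clients concurrently (the whitespace tokenizer and client names are toy stand-ins, not the repo's actual code):

```python
import asyncio

async def stream_tokens(answer: str):
    # Yield the answer one token at a time, as a streaming LLM callback
    # would; splitting on whitespace is a toy stand-in for real tokens.
    for token in answer.split():
        await asyncio.sleep(0)  # yield control so other clients make progress
        yield token

async def serve_client(name: str, answer: str) -> list[str]:
    # In the real app, each websocket client consumes its own stream.
    return [tok async for tok in stream_tokens(answer)]

async def main():
    # Two "clients" stream concurrently on one event loop.
    return await asyncio.gather(
        serve_client("a", "WPILib rocks"),
        serve_client("b", "FastAPI streams"),
    )

print(asyncio.run(main()))
```

Because each stream awaits between tokens, the loop interleaves work for all connected users without threads.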

✅ Running locally

  1. Install dependencies: pip install -r requirements.txt
  2. Run ingest.sh to ingest WPILib docs data into the vectorstore (only needs to be done once).
  3. Run the app: make start
    1. To enable tracing, make sure langchain-server is running locally and pass tracing=True to get_chain in main.py. See the LangChain tracing documentation for details.
  4. Open localhost:9000 in your browser.

📚 Technical description

There are two components: ingestion and question-answering.

Ingestion has the following steps:

  1. Pull html from documentation site
  2. Load html with LangChain's ReadTheDocs Loader
  3. Split documents with LangChain's TextSplitter
  4. Create a vectorstore of embeddings, using LangChain's vectorstore wrapper (with OpenAI's embeddings and FAISS vectorstore).
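Step 3 above chunks the loaded documents before embedding. The repo uses LangChain's TextSplitter; as a minimal, dependency-free sketch of the idea, the function below (a hypothetical helper, not the repo's code) slides a fixed-size window over the text with an overlap, so a sentence cut at one chunk boundary still appears whole in a neighbor:

```python
def split_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    # Slide a window of `chunk_size` characters across the text; consecutive
    # chunks share `overlap` characters so boundary sentences survive intact.
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks
```

Overlapping chunks trade a little storage for better retrieval: a fact straddling a boundary is still embedded as one contiguous span somewhere.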

Question-Answering has the following steps, all handled by ConversationalRetrievalChain:

  1. Given the chat history and new user input, determine what a standalone question would be (using ChatGPT).
  2. Given that standalone question, look up relevant documents from the vectorstore.
  3. Pass the standalone question and relevant documents to ChatGPT to generate a final answer.
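Step 2 above is a nearest-neighbor search over embedding vectors, which in the repo FAISS handles. As a toy illustration of what "look up relevant documents" means (pure Python, with made-up two-dimensional embeddings in place of OpenAI's), ranking by cosine similarity looks like this:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product divided by the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec: list[float], docs: list[tuple[str, list[float]]], k: int = 2) -> list[str]:
    # docs is a list of (text, embedding) pairs -- a tiny in-memory
    # stand-in for the FAISS vectorstore.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

The retrieved texts are then stuffed into the prompt alongside the standalone question for the final answer-generation call.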

About

Your WPILib Pair Programmer

License: GNU General Public License v3.0


Languages

  - Python 56.2%
  - HTML 41.4%
  - Shell 1.7%
  - Makefile 0.8%