
# ChatQ 🌐🤖

![ChatQ Logo](https://img.shields.io/badge/ChatQ-FastAPI-orange?style=flat-square)
![Release](https://img.shields.io/github/v/release/Jocker-123/ChatQ?style=flat-square)
![License](https://img.shields.io/badge/license-MIT-brightgreen)

Welcome to **ChatQ**, a Local Retrieval-Augmented Generation (RAG) system. This project leverages FastAPI and integrates cutting-edge technologies like vector search, Elasticsearch, and optional web search. With ChatQ, you can harness the power of large language models (LLMs) such as Mistral or GPT-4 for intelligent question answering.

---

## Table of Contents

- [Features](#features)
- [Technologies Used](#technologies-used)
- [Getting Started](#getting-started)
- [Usage](#usage)
- [Contributing](#contributing)
- [License](#license)
- [Contact](#contact)

---

## Features

- **Intelligent Question Answering**: Utilize LLMs to answer complex queries.
- **Local RAG Implementation**: Combine local data retrieval with generative models for effective responses.
- **FastAPI Framework**: Built on FastAPI for rapid deployment and high performance.
- **Vector Search**: Implement vector-based search for precise information retrieval (a sketch of a possible index setup follows this list).
- **Integration with Elasticsearch**: Leverage Elasticsearch for advanced search capabilities.
- **Web Search Option**: Include an optional web search for real-time data access.

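A rough sketch of what such a vector-search setup might involve is shown below: it creates an Elasticsearch index with a `dense_vector` field and stores a document chunk alongside its embedding. The index name, field names, and embedding dimension are illustrative assumptions (using the Elasticsearch 8.x Python client), not ChatQ's actual schema.

```python
# Hypothetical indexing sketch: store text chunks with embeddings in
# Elasticsearch so they can be retrieved later via vector search.
# Index name, field names, and the 384-dim embedding size are assumptions.

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumes a local Elasticsearch node

# Create an index whose "embedding" field supports kNN vector search.
es.indices.create(
    index="documents",
    mappings={
        "properties": {
            "text": {"type": "text"},
            "embedding": {
                "type": "dense_vector",
                "dims": 384,
                "index": True,
                "similarity": "cosine",
            },
        }
    },
)

# Each chunk is indexed with its raw text and a precomputed embedding vector.
es.index(
    index="documents",
    document={"text": "Example chunk of a local document.", "embedding": [0.0] * 384},
)
```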
---

## Technologies Used

ChatQ combines various technologies to achieve its goals (a minimal sketch of how they fit together at query time follows the list):

- **FastAPI**: A modern, fast (high-performance) web framework for building APIs with Python 3.7+.
- **Elasticsearch**: A distributed, RESTful search and analytics engine designed for horizontal scalability.
- **Vector Databases**: Store and retrieve vectors efficiently for semantic search.
- **LLMs**: Utilize models like Mistral and GPT-4 for natural language understanding.
- **Langchain**: A framework for building applications powered by LLMs.

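As a rough illustration of how these pieces fit together at query time, the sketch below embeds a question, retrieves the nearest document chunks from Elasticsearch with a kNN vector search, and prompts an LLM to answer from the retrieved context. The index name, field names, and the `embed()`/`generate()` placeholders are illustrative assumptions, not ChatQ's actual code.

```python
# Hypothetical query-time RAG flow: retrieve with vector search, then generate.
# embed() and generate() are placeholders for the project's embedding model and
# LLM (e.g. Mistral or GPT-4); "documents" and its field names are assumptions.

from typing import List

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def embed(text: str) -> List[float]:
    """Placeholder: return the embedding vector for `text`."""
    raise NotImplementedError

def generate(prompt: str) -> str:
    """Placeholder: call the configured LLM with `prompt`."""
    raise NotImplementedError

def answer(question: str, index: str = "documents", k: int = 4) -> str:
    # 1. Vector search: find the k chunks whose embeddings are closest to the question.
    hits = es.search(
        index=index,
        knn={
            "field": "embedding",
            "query_vector": embed(question),
            "k": k,
            "num_candidates": 50,
        },
    )["hits"]["hits"]

    # 2. Ground the LLM in the retrieved context.
    context = "\n\n".join(hit["_source"]["text"] for hit in hits)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # 3. Generate and return the final answer.
    return generate(prompt)
```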
---

## Getting Started

To get started with ChatQ, you will need to clone the repository and install the necessary dependencies.

### Prerequisites

- Python 3.7 or higher
- pip (Python package installer)

### Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/Jocker-123/ChatQ.git
   ```

2. Navigate to the project directory:

   ```bash
   cd ChatQ
   ```

3. Install the required packages:

   ```bash
   pip install -r requirements.txt
   ```

4. Start the FastAPI server:

   ```bash
   uvicorn main:app --reload
   ```

Your ChatQ server should now be running at `http://localhost:8000`.
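
The `uvicorn main:app --reload` command assumes a FastAPI application object named `app` defined in `main.py`. As a hypothetical illustration (not ChatQ's actual application code), a minimal `main.py` exposing the `/ask` endpoint used in the examples below might look like this, with `answer_question()` standing in for the real retrieval-and-generation pipeline:

```python
# Minimal, hypothetical main.py sketch for an /ask question-answering endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="ChatQ")

class AskRequest(BaseModel):
    question: str

class AskResponse(BaseModel):
    answer: str

def answer_question(question: str) -> str:
    """Placeholder for the retrieval + generation (RAG) pipeline."""
    return f"You asked: {question}"

@app.post("/ask", response_model=AskResponse)
def ask(req: AskRequest) -> AskResponse:
    # Delegate to the RAG pipeline and return the generated answer as JSON.
    return AskResponse(answer=answer_question(req.question))
```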


## Usage

Once the server is up, you can start making requests to the API.

### Example Request

You can use tools like Postman or cURL to interact with the API. Here’s a simple example using cURL:

```bash
curl -X POST "http://localhost:8000/ask" -H "Content-Type: application/json" -d '{"question": "What is Retrieval-Augmented Generation?"}'
```

### Example Response

The API will return a JSON response with the answer to your question:

```json
{
  "answer": "Retrieval-Augmented Generation (RAG) is a framework that combines retrieval and generation techniques to answer questions more effectively."
}
```
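
The same request can also be made from Python; the snippet below is a minimal sketch using the `requests` library, assuming the `/ask` endpoint and response shape shown above.

```python
# Query the local ChatQ server from Python (mirrors the cURL example above).
import requests

response = requests.post(
    "http://localhost:8000/ask",
    json={"question": "What is Retrieval-Augmented Generation?"},
    timeout=60,
)
response.raise_for_status()
print(response.json()["answer"])
```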

## Contributing

We welcome contributions to ChatQ! If you want to contribute, please follow these steps:

1. Fork the repository.
2. Create a new branch for your feature or fix.
3. Make your changes.
4. Submit a pull request.

Please ensure your code adheres to the existing style and includes tests where applicable.


## License

This project is licensed under the MIT License. See the LICENSE file for details.


## Contact

For any inquiries or suggestions, feel free to reach out through the project's GitHub repository.


## Releases

For the latest updates and releases, please check the [Releases](https://github.com/Jocker-123/ChatQ/releases) page.



## Acknowledgements

We would like to thank the developers of FastAPI, Elasticsearch, and the contributors of the libraries used in this project.


## Additional Resources

Feel free to explore the code and contribute to the development of ChatQ. Your feedback and suggestions are invaluable as we continue to improve this project.


