QText


End-to-end service for querying text with hybrid search and reranking.

Application scenarios:

  • Personal knowledge database + search engine
  • Rerank experiment and visualization
  • RAG pipeline


Features

  • full text search (Postgres GIN + text search)
  • vector similarity search (pgvecto.rs HNSW)
  • sparse search (pgvecto.rs HNSW)
  • generate vector and sparse vector if not provided
  • reranking
  • semantic highlight
  • hybrid search explanation
  • TUI
  • OpenAPI
  • OpenMetrics
  • filtering

How to use

To start all the services with docker compose:

docker compose -f docker/compose.yaml up -d server

Some of the dependent services are optional:

  • emb: generates embeddings for queries and documents
  • sparse: generates sparse embeddings for queries and documents (this requires a HuggingFace token that has signed the agreement for prithivida/Splade_PP_en_v1)
  • highlight: provides the semantic highlight feature
  • encoder: reranks with a cross-encoder model; you can choose other methods or online services instead

For the client example, check:

API

We provide a simple sync/async client. You can also refer to the OpenAPI and build your own client.

  • /api/namespace POST: create a new namespace and configure the index
  • /api/doc POST: add a new doc
  • /api/query POST: query the docs
  • /api/highlight POST: semantic highlight
  • /metrics GET: open metrics

Check the OpenAPI documentation for more information (this requires the qtext service to be running).
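As a rough sketch of what calling these endpoints could look like from Python — the base URL, port, and JSON field names below are assumptions, so consult the OpenAPI documentation for the real request shapes:

```python
import json
from urllib import request

BASE_URL = "http://localhost:8000"  # assumption: adjust to your qtext port


def post(path: str, payload: dict) -> dict:
    """POST a JSON payload to the qtext service and decode the JSON reply."""
    req = request.Request(
        f"{BASE_URL}{path}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())


# The payload field names below are hypothetical -- check the OpenAPI schema.
def query_payload(namespace: str, q: str) -> dict:
    """Build the body for POST /api/query."""
    return {"namespace": namespace, "query": q}


def create_namespace(name: str) -> dict:
    return post("/api/namespace", {"name": name})


def add_doc(namespace: str, text: str) -> dict:
    return post("/api/doc", {"namespace": namespace, "text": text})


def query(namespace: str, q: str) -> dict:
    return post("/api/query", query_payload(namespace, q))
```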

Terminal UI

We provide a simple terminal UI powered by Textual for you to interact with the service.

pip install textual
# need to run the qtext service first
python tui/main.py $QTEXT_PORT

Configurations

Check config.py for more details. The service reads $HOME/.config/qtext/config.json if that file exists.

Integrate to the RAG pipeline

This project provides most of the components you need for RAG, except the final LLM generation step. You can send the retrieved and reranked docs to any LLM provider to get the final result.
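For instance, the hand-off to the LLM can be as simple as packing the reranked docs into a grounded prompt. This is a generic sketch, not part of the qtext API, and the prompt format is an arbitrary choice:

```python
def build_rag_prompt(question: str, docs: list[str]) -> str:
    """Assemble a grounded prompt from reranked documents.

    Documents are numbered so the LLM can cite which passage it used.
    """
    context = "\n\n".join(f"[{i}] {doc}" for i, doc in enumerate(docs, 1))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```

The resulting string can then be sent to any chat-completion endpoint.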

Customize the table schema

Note

If you already have the table in Postgres, you are responsible for the text-indexing and vector-indexing part.

  1. Define a dataclass that includes the necessary columns as class attributes
    • annotate the primary_key, text_index, vector_index, and sparse_index columns with metadata (not all of them are required, only the necessary ones)
    • attributes without a default value or default factory are treated as required when you add new docs
  2. Implement the to_record and from_record methods used in the reranking stage
  3. Change config.vector_store.schema to the class you have defined

Check the schema.py for more details.
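The steps above might look roughly like this. The metadata keys and the record shape are assumptions for illustration, so check schema.py for the actual conventions:

```python
from __future__ import annotations

from dataclasses import dataclass, field, fields


@dataclass
class ChunkRecord:
    # Metadata keys mirror the README's description; the exact names
    # are assumptions -- see schema.py for the real ones.
    id: int = field(metadata={"primary_key": True})
    text: str = field(metadata={"text_index": True})
    # Fields with defaults are optional when adding new docs; the
    # service can generate the vectors if they are not provided.
    vector: list[float] | None = field(default=None, metadata={"vector_index": True})
    sparse: dict | None = field(default=None, metadata={"sparse_index": True})

    def to_record(self) -> tuple:
        """Flatten the row to a tuple for the reranking stage (assumed shape)."""
        return tuple(getattr(self, f.name) for f in fields(self))

    @classmethod
    def from_record(cls, record: tuple) -> "ChunkRecord":
        """Rebuild a row from a tuple produced by to_record."""
        return cls(*record)
```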

About

License: Apache License 2.0

