txtai

πŸ’‘ Semantic search and workflows powered by language models

Home page: https://neuml.github.io/txtai

txtai is an open-source platform for semantic search and workflows powered by language models.

Traditional search systems use keywords to find data. Semantic search has an understanding of natural language and identifies results that have the same meaning, not necessarily the same keywords.

txtai builds embeddings databases, which are a union of vector indexes and relational databases. This enables vector search with SQL. Embeddings databases can stand on their own and/or serve as a powerful knowledge source for large language model (LLM) prompts.
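
To make this concrete, the sketch below shows vector search expressed as SQL. It is a minimal, hedged example: the model path matches the recommended embeddings model later in this README, content storage is enabled to allow SQL, and the sample data is purely illustrative.

# Minimal sketch: vector search with SQL over an embeddings database
# Enabling content storage keeps the original text alongside the vector index,
# which is what makes SQL queries possible
from txtai.embeddings import Embeddings

embeddings = Embeddings({
    "path": "sentence-transformers/all-MiniLM-L6-v2",
    "content": True
})

# Index (id, text, tags) tuples; the data below is illustrative
data = [
    "US tops 5 million confirmed virus cases",
    "Canada's last fully intact ice shelf has suddenly collapsed",
    "Maine man wins $1M from $25 lottery ticket"
]
embeddings.index([(uid, text, None) for uid, text in enumerate(data)])

# similar() runs the semantic query inside a SQL statement
print(embeddings.search(
    "select id, text, score from txtai where similar('feel good story') limit 1"
))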

Semantic workflows connect language models together to build intelligent applications.

Integrate conversational search, retrieval augmented generation (RAG), LLM chains, automatic summarization, transcription, translation and more.
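
As a rough sketch of how such a workflow looks in code (the pipelines, input text and target language below are illustrative choices, not a prescribed setup), a summarization pipeline can be chained into a translation pipeline:

# Minimal sketch: chain pipelines together with a workflow
# Summary and Translation load default models; the input text is illustrative
from txtai.pipeline import Summary, Translation
from txtai.workflow import Task, Workflow

summary = Summary()
translate = Translation()

# Each Task wraps a callable applied to the batch of workflow elements
workflow = Workflow([
    Task(lambda x: summary(x)),
    Task(lambda x: translate(x, "fr"))
])

text = ("txtai executes machine-learning workflows to transform data and build "
        "AI-powered semantic search applications.")
print(list(workflow([text])))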

Summary of txtai features:

  • πŸ”Ž Vector search with SQL, object storage, topic modeling, graph analysis, multiple vector index backends (Faiss, Annoy, Hnswlib) and support for external vector databases
  • πŸ“„ Create embeddings for text, documents, audio, images and video
  • πŸ’‘ Pipelines powered by language models that run LLM prompts, question-answering, labeling, transcription, translation, summarization and more
  • β†ͺ️️ Workflows to join pipelines together and aggregate business logic. txtai processes can be simple microservices or multi-model workflows.
  • βš™οΈ Build with Python or YAML. API bindings available for JavaScript, Java, Rust and Go.
  • ☁️ Cloud-native architecture that scales out with container orchestration systems (e.g. Kubernetes)

txtai is built with Python 3.8+, Hugging Face Transformers, Sentence Transformers and FastAPI.

The following applications are powered by txtai.

  • txtchat - Conversational search and workflows for all
  • paperai - Semantic search and workflows for medical/scientific papers
  • codequestion - Semantic search for developers
  • tldrstory - Semantic search for headlines and story text

In addition to this list, many other open-source projects, published research efforts and proprietary/commercial systems have built on txtai in production.

Why txtai?

New vector databases, LLM frameworks and everything in between are sprouting up daily. Why build with txtai?

  • Up and running in minutes with pip or Docker
# Get started in a couple lines
from txtai.embeddings import Embeddings

embeddings = Embeddings()
embeddings.index(["Correct", "Not what we hoped"])
embeddings.search("positive", 1)
#[(0, 0.29862046241760254)]
  • Built-in API makes it easy to develop applications using your programming language of choice (see the Python client sketch after this list)
# app.yml
embeddings:
    path: sentence-transformers/all-MiniLM-L6-v2
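# Start the API and run a query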
CONFIG=app.yml uvicorn "txtai.api:app"
curl -X GET "http://localhost:8000/search?query=positive"
  • Run locally - no need to ship data off to disparate remote services
  • Work with micromodels all the way up to large language models (LLMs)
  • Low footprint - install additional dependencies and scale up when needed
  • Learn by example - notebooks cover all available functionality
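
To illustrate the API bullet above, here is a minimal client sketch in Python. It assumes the service from the API example was started with the app.yml configuration on localhost:8000 and that the requests package is installed; adjust the URL and query to your setup.

# Minimal sketch: query a running txtai API service from Python
# Assumes the API example above is running on localhost:8000
# and the requests package is installed (pip install requests)
import requests

response = requests.get(
    "http://localhost:8000/search",
    params={"query": "positive", "limit": 1}
)
print(response.json())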

Installation

The easiest way to install is via pip and PyPI.

pip install txtai

Python 3.8+ is supported. Using a Python virtual environment is recommended.

See the detailed install instructions for more information covering optional dependencies, environment-specific prerequisites, installing from source, conda support and how to run with containers.

Examples

An abbreviated list of example notebooks and applications giving an overview of txtai is shown below. See the documentation for the full set of examples.

Semantic Search

Build semantic/similarity/vector/neural search applications.

Notebooks:

  • Introducing txtai ▢️ - Overview of the functionality provided by txtai
  • Build an Embeddings index with Hugging Face Datasets - Index and search Hugging Face Datasets
  • Add semantic search to Elasticsearch - Add semantic search to existing search systems
  • Semantic Graphs - Explore topics, data connectivity and run network analysis
  • Embeddings in the Cloud - Load and use an embeddings index from the Hugging Face Hub
  • Customize your own embeddings database - Ways to combine vector indexes with relational databases

LLM

Prompt-driven search, retrieval augmented generation (RAG), pipelines and workflows that interface with large language models (LLMs).
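
As a hedged sketch of how retrieval augmented generation looks with txtai (the model paths, prompt format and indexed text below are illustrative, and the exact pipeline interface depends on the txtai version), an Extractor pipeline pairs an embeddings index with an LLM:

# Minimal sketch: retrieval augmented generation (RAG) with an embeddings index and an LLM
# Model paths, prompt and data are illustrative
from txtai.embeddings import Embeddings
from txtai.pipeline import Extractor

embeddings = Embeddings({"path": "sentence-transformers/all-MiniLM-L6-v2", "content": True})
embeddings.index([(0, "Maine man wins $1M from $25 lottery ticket", None)])

# Extractor combines a similarity/embeddings instance with a language model
extractor = Extractor(embeddings, "google/flan-t5-base")

question = "What did the man win?"
prompt = f"""Answer the following question using the context below.
Question: {question}
Context:"""

# Returns a list of (name, answer) tuples
print(extractor([("answer", question, prompt, False)]))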

Notebooks:

  • Prompt-driven search with LLMs - Embeddings-guided and prompt-driven search with large language models (LLMs)
  • Prompt templates and task chains - Build model prompts and connect tasks together with workflows

Pipelines

Transform data with language model backed pipelines.
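
As a quick illustration of the pipeline pattern (the input text and candidate tags are illustrative, and the default model is downloaded on first use), a zero-shot labeling pipeline is a single callable:

# Minimal sketch: a language model backed pipeline
# Labels runs zero-shot classification with a default model when no path is given
from txtai.pipeline import Labels

labels = Labels()

# Score candidate tags against the input text; returns (id, score) pairs
print(labels("Great news, the team won the championship!", ["positive", "negative"]))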

Notebooks:

  • Extractive QA with txtai - Introduction to extractive question-answering with txtai
  • Apply labels with zero shot classification - Use zero shot learning for labeling, classification and topic modeling
  • Building abstractive text summaries - Run abstractive text summarization
  • Extract text from documents - Extract text from PDF, Office, HTML and more
  • Text to speech generation - Generate speech from text
  • Transcribe audio to text - Convert audio files to text
  • Translate text between languages - Streamline machine translation and language detection
  • Generate image captions and detect objects - Captions and object detection for images

Workflows

Efficiently process data at scale.

Notebooks:

  • Run pipeline workflows ▢️ - Simple yet powerful constructs to efficiently process data
  • Workflow Scheduling - Schedule workflows with cron expressions
  • Push notifications with workflows - Generate and push notifications with workflows

Model Training

Train NLP models.
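
A hedged sketch of the training pipeline is shown below; the base model and the tiny inline dataset are illustrative placeholders, not a recommended configuration.

# Minimal sketch: train a text classifier with the HFTrainer pipeline
# The base model and training data are illustrative placeholders
from txtai.pipeline import HFTrainer

trainer = HFTrainer()

# Each record maps text to a label id
data = [
    {"text": "Dogs", "label": 0},
    {"text": "dog", "label": 0},
    {"text": "Cats", "label": 1},
    {"text": "cat", "label": 1},
] * 100

# Returns a fine-tuned (model, tokenizer) pair
model, tokenizer = trainer("google/bert_uncased_L-2_H-128_A-2", data)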

Notebooks:

  • Train a text labeler - Build text sequence classification models
  • Train a QA model - Build and fine-tune question-answering models
  • Train a language model from scratch - Build new language models

Applications

A series of example applications built with txtai. Links to hosted versions on Hugging Face Spaces (πŸ€—) are also provided.

  • Basic similarity search - Basic similarity search example. Data from the original txtai demo. πŸ€—
  • Baseball stats - Match historical baseball player stats using vector search. πŸ€—
  • Book search - Book similarity search application. Index book descriptions and query using natural language statements. Local run only
  • Image search - Image similarity search application. Index a directory of images and run searches to identify images similar to the input query. πŸ€—
  • Summarize an article - Workflow that extracts text from a webpage and builds a summary. πŸ€—
  • Wiki search - Wikipedia search application. Queries the Wikipedia API and summarizes the top result. πŸ€—
  • Workflow builder - Build and execute txtai workflows. Connect summarization, text extraction, transcription, translation and similarity search pipelines together to run unified workflows. πŸ€—

Model guide

The current recommended models are listed below. These models all allow commercial use and offer a blend of speed and performance.

  • Embeddings: all-MiniLM-L6-v2, E5-base-v2
  • Image Captions: BLIP
  • Labels - Zero Shot: BART-Large-MNLI
  • Labels - Fixed: fine-tune with the training pipeline
  • Large Language Model (LLM): Flan T5 XL, Falcon 7B Instruct
  • Summarization: DistilBART
  • Text-to-Speech: ESPnet JETS
  • Transcription: Whisper
  • Translation: OPUS Model Series

Models can be loaded from the Hugging Face Hub by model path or from a local directory. Model paths are optional; defaults are loaded when not specified. For tasks with no recommended model, txtai uses the default models shown in the Hugging Face Tasks guide.
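
For instance (the paths below are illustrative), a model path can be passed explicitly or omitted to fall back to the default:

# Minimal sketch: load models by Hugging Face Hub path, local directory or default
# The paths shown are illustrative
from txtai.embeddings import Embeddings
from txtai.pipeline import Summary

# Hugging Face Hub path
embeddings = Embeddings({"path": "intfloat/e5-base-v2"})

# Local directory (hypothetical path)
# embeddings = Embeddings({"path": "/path/to/local/model"})

# No path specified - the default summarization model is used
summary = Summary()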

See the following links to learn more.

Documentation

Full documentation on txtai is available, including configuration settings for embeddings, pipelines, workflows and the API, along with a FAQ covering common questions and issues.

Contributing

For those who would like to contribute to txtai, please see this guide.

License

Apache License 2.0