FingerLiu / open-assistant-api

The Open Assistant API is a ready-to-use, open-source, self-hosted agent/GPTs orchestration framework, supporting customized extensions for LLM, RAG, function calling, and tool capabilities. It also integrates seamlessly with the openai/langchain SDKs.


Open Assistant API

✨ An out-of-the-box AI intelligent assistant API ✨

English | 简体中文

Introduction

Open Assistant API is an open-source, self-hosted AI intelligent assistant API, compatible with the official OpenAI interface. It can be used directly with the official OpenAI Client to build LLM applications.

It supports One API for integration with more commercial and private models.

Usage

Below is an example using the official OpenAI Python library, openai:

import openai

client = openai.OpenAI(
    base_url="http://127.0.0.1:8086/api/v1",
    api_key="xxx"
)

assistant = client.beta.assistants.create(
    name="demo",
    instructions="You are a helpful assistant.",
    model="gpt-4-1106-preview"
)
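After creating an assistant, the usual next steps are to open a thread, post a user message, start a run, and poll until the run finishes. A hedged sketch of that flow is below; the helper function and its parameters are not part of this repository, but the method names follow the OpenAI Assistants beta API that this project is compatible with:

```python
import time

def run_and_wait(client, assistant_id, prompt, poll_interval=1.0, timeout=60.0):
    """Create a thread with one user message, run the given assistant on it,
    and poll until the run leaves the 'queued'/'in_progress' states."""
    # Open a fresh thread and attach the user's message.
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=prompt
    )
    # Start a run of the assistant against that thread.
    run = client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=assistant_id
    )
    # Poll until the run settles (completed, failed, requires_action, ...).
    deadline = time.time() + timeout
    while run.status in ("queued", "in_progress") and time.time() < deadline:
        time.sleep(poll_interval)
        run = client.beta.threads.runs.retrieve(
            thread_id=thread.id, run_id=run.id
        )
    return run
```

With the client configured as above, something like `run_and_wait(client, assistant.id, "Hello!")` would drive one request/response cycle.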

Why Choose Open Assistant API

| Feature | Open Assistant API | OpenAI Assistant API |
| --- | --- | --- |
| Ecosystem Strategy | Open Source | Closed Source |
| RAG Engine | Simple Implementation | Supported |
| Internet Search | Supported | Not Supported |
| Custom Functions | Supported | Supported |
| Built-in Tool | Extendable | Not Extendable |
| Code Interpreter | Under Development | Supported |
| LLM Support | Supports More LLMs | Only GPT |
| Message Streaming Output | Supported | Not Supported |
| Local Deployment | Supported | Not Supported |
  • LLM Support: Compared to the official OpenAI version, more models can be supported by integrating with One API.
  • Tool: Online search is currently supported, and more tools can easily be added.
  • RAG Engine: The currently supported file types are txt, pdf, html, and markdown; we provide a preliminary implementation.
  • Message Streaming Output: Messages can be streamed for a smoother user experience.
  • Ecosystem Strategy: Open source, so you can deploy the service locally and extend the existing features.
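The custom-function capability above uses the OpenAI function-calling schema for the assistant's `tools` parameter. A hedged sketch is below; `get_city_weather` and its parameters are hypothetical examples, while `web_search` and `retrieval` are the built-in tool names this project documents:

```python
# Tool list for assistant creation: two built-in tools plus one
# custom function described in OpenAI's function-calling JSON schema.
tools = [
    {"type": "web_search"},   # built-in internet search tool
    {"type": "retrieval"},    # built-in RAG/file retrieval tool
    {
        "type": "function",
        "function": {
            "name": "get_city_weather",  # hypothetical example function
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    },
]

# Passed at creation time, e.g.:
# assistant = client.beta.assistants.create(
#     name="demo", model="gpt-4-1106-preview", tools=tools
# )
```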

Quick Start

The easiest way to start the Open Assistant API is to run the docker-compose.yml file. Make sure Docker and Docker Compose are installed on your machine before running.

Configuration

Go to the project root directory, open docker-compose.yml, and fill in the OpenAI api_key and, optionally, the Bing search key.

# openai api_key (supports OneAPI api_key)
OPENAI_API_KEY=<openai_api_key>

# bing search key (optional)
BING_SUBSCRIPTION_KEY=<bing_subscription_key>

Run

Run with Docker Compose:

docker compose up -d

Access API

API Base URL: http://127.0.0.1:8086/api/v1

Interactive API documentation: http://127.0.0.1:8086/docs

Complete Usage Example

In this example, an AI assistant is created and run using the official OpenAI client library, with the two built-in tools web_search and retrieval plus a custom function. Before running, install the Python library with pip install openai.

# !pip install openai
python tests/e2e/index.py
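When a run invokes a custom function, it pauses with status `requires_action`, and the client must execute the function locally and submit the result back. A hedged sketch of that leg of the flow, assuming the standard Assistants `submit_tool_outputs` shape (the helper and handler names are illustrative, not from tests/e2e/index.py):

```python
import json

def submit_tool_outputs(client, thread_id, run, handlers):
    """For a run paused with status 'requires_action', execute the matching
    local handler for each requested tool call and submit the results."""
    calls = run.required_action.submit_tool_outputs.tool_calls
    outputs = []
    for call in calls:
        fn = handlers[call.function.name]          # local Python callable
        args = json.loads(call.function.arguments)  # LLM-produced arguments
        outputs.append({
            "tool_call_id": call.id,
            "output": json.dumps(fn(**args)),       # result sent back as JSON
        })
    # Resume the run with the collected tool outputs.
    return client.beta.threads.runs.submit_tool_outputs(
        thread_id=thread_id, run_id=run.id, tool_outputs=outputs
    )
```

After submitting, the run re-enters `queued`/`in_progress` and can be polled again until it completes.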

Community and Support

  • Join the Slack channel to see new releases, discuss issues, and participate in community interactions.

  • Join the Discord channel to interact with other community members.

  • Join the WeChat group.

Special Thanks

We mainly referred to and relied on the following projects:

Contributing

Please read our contribution document to learn how to contribute.

Open Source License

This repository follows the MIT open source license. For more information, please see the LICENSE file.
