
GPT-Swarm


Project logo

Videos:

  • Submission for the lablab hackathon
  • Simple frontend demo

GPT-Swarm is a groundbreaking project that combines swarm intelligence and advanced language models to tackle complex tasks across diverse domains. Our innovative framework is robust, adaptive, and scalable, outperforming single models by leveraging the power of collective problem-solving and distributed decision-making. It also carries out research at remarkable speed.

Table of Contents

  • Why
  • What it can do
  • Architecture overview
  • Installation and Usage
  • Advanced usage
  • Next-ups
  • Docker
  • How to Contribute
  • About

Why

What Swarm Intelligence Is

GPT-Swarm is inspired by the principles of emergence. In nature, when you allow simple agents to interact with each other, they show fundamentally new capabilities. Typical examples are bee hives and ant colonies, or even countries and cultures.

Unprecedented Scalability and Diversity

You can add any models with any capabilities to the swarm and make them work together.

Adaptive Intelligence without Retraining

By utilizing shared vector-based memory and giving the swarm the ability to adjust itself and its behavior, we achieve adaptability similar to that of reinforcement learning, but without the expensive retraining of base models.
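The sketch below illustrates the idea only; it is not the project's actual implementation or API. Agents write their findings into one shared store as embedding vectors and later retrieve the most similar entries, and the embed() function is a toy stand-in for a real embedding model.

    # Minimal sketch of the shared vector-memory idea, not GPT-Swarm's actual API.
    import numpy as np

    def embed(text, dim=64):
        # Deterministic toy embedding; a real swarm would call an embedding model.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.normal(size=dim)
        return v / np.linalg.norm(v)

    class SharedVectorMemory:
        def __init__(self):
            self.texts, self.vectors = [], []

        def add(self, text):
            self.texts.append(text)
            self.vectors.append(embed(text))

        def query(self, question, k=3):
            if not self.vectors:
                return []
            # Cosine similarity works directly because all vectors are unit length.
            scores = np.stack(self.vectors) @ embed(question)
            return [self.texts[i] for i in np.argsort(scores)[::-1][:k]]

    # All agents share the same memory, so one agent's findings steer the others
    # without retraining any base model.
    memory = SharedVectorMemory()
    memory.add("Competitor A raised prices by 10% in Q2.")
    memory.add("Competitor B launched a free tier last month.")
    print(memory.query("What changed in competitor pricing?", k=1))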

What it can do

The swarm is the only intelligence system to date that can effectively handle complex tasks such as performing market research or generating complete software solutions.


Architecture overview

Project diagram

Installation and Usage

  1. First, you need to create a keys.json file in the root folder. GOOGLE_API_KEY and CUSTOM_SEARCH_ENGINE_ID are needed for the agents to be able to use Google Search.

    {
        "OPENAI_API_KEY": "sk-YoUrKey",
        "GOOGLE_API_KEY": "blablablaapiKey", 
        "CUSTOM_SEARCH_ENGINE_ID": "12345678aa25"
    }
  2. Then you can specify the swarm configuration and tasks in swarm_config.yaml. A hypothetical example of such a config is sketched after this list.

  3. After that, start the swarm by running:

    # On Linux or Mac:
    ./run.sh start
    # On Windows:
    .\run.bat
  4. After some time the swarm will start producing a report and storing it in the run folder. By default the output is written to:
    ./tmp/swarm/output.txt and ./tmp/swarm/output.json
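For step 2, a purely hypothetical sketch of what swarm_config.yaml could contain is shown below. The field names here are illustrative assumptions, not the project's actual schema; treat the swarm_config.yaml shipped with the repository as the real reference.

    # Illustrative only: these field names are assumptions, not GPT-Swarm's real schema.
    task: >
      Perform market research on open-source LLM agent frameworks
      and summarize the main competitors.
    agents:
      - role: researcher
        model: gpt-4
        count: 4
      - role: analyst
        model: gpt-3.5-turbo
        count: 2
    output_dir: ./tmp/swarm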

Advanced usage

  1. If you are brave, you can go through the logs. Be careful, because the swarm produces an incredible amount of data very fast. You can find the logs in ./tmp/swarm/swarm.json. You can also use ./tests/_explore_logs.ipynb to digest the logs more easily, or the short snippet after this list.

  2. The shared memory of a run is persistent. You can ask additional questions of this memory using ./tests/_task_to_vdb.ipynb.
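If you want a quick scripted look at the logs instead of the notebook, something like the following works. It only assumes that ./tmp/swarm/swarm.json is a valid JSON file and makes no assumptions about the structure of the entries.

    # Peek at the swarm logs; assumes only that the file is valid JSON.
    import json
    from pathlib import Path

    data = json.loads(Path("./tmp/swarm/swarm.json").read_text())

    # Print a small summary without assuming a particular log schema.
    if isinstance(data, list):
        print(f"{len(data)} log entries")
        for entry in data[:5]:
            print(json.dumps(entry, indent=2)[:500])  # first few entries, truncated
    else:
        print("top-level keys:", list(data))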

Next-ups

  • make adding new models as easy as possible, including custom-deployed ones like LLaMA
  • multi-key support for higher scalability

🚧 Docker

Build a multi-arch image:

    docker buildx build --platform linux/amd64,linux/arm64 --tag gpt-swarm/gpt-swarm:0.0.0 .
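Running the resulting image might look like the sketch below; the /app mount paths are assumptions, so check the Dockerfile for the actual working directory and entrypoint.

    # Hypothetical run command: the /app paths are assumptions.
    docker run --rm \
      -v "$(pwd)/keys.json:/app/keys.json" \
      -v "$(pwd)/tmp:/app/tmp" \
      gpt-swarm/gpt-swarm:0.0.0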

How to Contribute

  • follow the SOLID principles and don't break the abstractions
  • create bite-sized PRs

About

GPT-Swarm is an open-source project that harnesses the power of swarm intelligence to enhance the capabilities of state-of-the-art language models. By leveraging collective problem-solving and distributed decision-making, GPT-Swarm creates a robust, adaptive, and scalable framework for tackling complex tasks across various domains.

License: Apache License 2.0

