
docker-diffusers-api ("banana-sd-base")

Diffusers / Stable Diffusion in docker with a REST API, supporting various models, pipelines & schedulers. Used by kiri.art and perfect for banana.dev.

Copyright (c) Gadi Cohen, 2022. MIT Licensed. Please give credit and link back to this repo if you use it in a public project.

Features

  • Pipelines: txt2img, img2img and inpainting in a single container
  • Models: stable-diffusion, waifu-diffusion, and easy to add others (e.g. jp-sd)
  • All model inputs supported, including setting nsfw filter per request
  • Permute base config to multiple forks based on yaml config with vars
  • Optionally send signed event logs / performance data to a REST endpoint
  • Can automatically download a checkpoint file and convert to diffusers

Note: This image was created for kiri.art. Everything is open source but there may be certain request / response assumptions. If anything is unclear, please open an issue.

Usage

  1. Clone or fork this repo.

  2. Variables:

    1. EITHER: set them in DOWNLOAD_VARS.py, APP_VARS.py and the Dockerfile;
    2. OR:
      1. Set the HF_AUTH_TOKEN environment variable,
      2. Edit scripts/permutations.yaml,
      3. Run scripts/permute.sh to create a bunch of distinct forks.
  3. Dev mode:

    1. Leave MODEL_ID as "ALL" and all models will be downloaded, allowing you to switch between them at request time (great for dev, useless for serverless).
    2. Set the HF_AUTH_TOKEN environment variable and run docker build -t banana-sd --build-arg HF_AUTH_TOKEN=$HF_AUTH_TOKEN .
    3. Run docker run --gpus all -p 8000:8000 banana-sd, then try a request against it (see the sketch below).
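
Once the container is up, a quick way to check it is a minimal request from Python. This is only a sketch: it assumes the server accepts JSON at POST / on port 8000 (test.py, mentioned below, also targets port 8000), and the response keys vary by pipeline and build.

# Minimal smoke test against a locally running container.
# Assumes JSON is accepted at POST / on port 8000; adjust to your setup.
import requests

payload = {
    "modelInputs": {"prompt": "Super dog", "num_inference_steps": 20},
    "callInputs": {
        "MODEL_ID": "CompVis/stable-diffusion-v1-4",
        "PIPELINE": "StableDiffusionPipeline",
        "SCHEDULER": "LMS",
    },
}

resp = requests.post("http://localhost:8000/", json=payload)
resp.raise_for_status()
print(list(resp.json().keys()))  # response keys vary; see "Sending requests" below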

Sending requests

See sd-mui source for more info, but basically, it's:

{
  "modelInputs": {
    "prompt": "Super dog",
    "num_inference_steps": 50,
    "guidance_scale": 7.5,
    "width": 512,
    "height": 512,
    "seed": 3239022079
  },
  "callInputs": {
    "MODEL_ID": "CompVis/stable-diffusion-v1-4",
    "PIPELINE": "StableDiffusionPipeline",
    "SCHEDULER": "LMS",
    "safety_checker": true,
  },
}

If provided, init_image and mask_image should be base64 encoded.
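
For example, a minimal sketch of preparing init_image (the file name here is a placeholder; the pipeline name in the comment is the stock diffusers one, so check the sd-mui source for the exact call):

# Sketch: base64-encode a local image for use as init_image.
import base64

with open("init.png", "rb") as f:  # placeholder path
    init_image = base64.b64encode(f.read()).decode("utf-8")

model_inputs = {"prompt": "Super dog", "init_image": init_image}
# Pair with callInputs like "PIPELINE": "StableDiffusionImg2ImgPipeline".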

Sorry, but this format might change without notice based on the needs of SD-MUI. It's been stable for a while, but we make no promises. Your best bet is to stay up to date with the SD-MUI source.

There are also some very basic examples in test.py, which you can view and run with python test.py if the container is already running on port 8000.

Keeping forks up to date

Depending on your personal preference, rebase or merge, e.g.:

  1. git fetch upstream
  2. git merge upstream/main
  3. git push

Or, if you're confident, do it in one step with no confirmations:

git fetch upstream && git merge upstream/main --no-edit && git push

Check scripts/permute.sh and your git remotes; some URLs are hardcoded. I'll make this easier in a future release.

Event logs / performance data

Set the CALL_URL and SIGN_KEY environment variables to send timing data at init and at inference start and end. You'll need to check the source code here and in sd-mui, as the format is in flux.
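
For illustration only, a generic HMAC-signed POST might look like the sketch below; the payload fields and the X-Signature header name are assumptions, so treat the repo source as authoritative.

# Illustrative sketch only: HMAC-SHA256-sign a JSON body with SIGN_KEY and
# POST it to CALL_URL. Field names and the X-Signature header are assumptions;
# the real format lives in this repo's source and is in flux.
import hashlib, hmac, json, os, time
import requests

body = json.dumps({"event": "inference_start", "time": time.time()}).encode()
sig = hmac.new(os.environ["SIGN_KEY"].encode(), body, hashlib.sha256).hexdigest()

requests.post(
    os.environ["CALL_URL"],
    data=body,
    headers={"Content-Type": "application/json", "X-Signature": sig},
)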

Original Template README follows

🍌 Banana Serverless

This repo gives a basic framework for serving Stable Diffusion in production using simple HTTP servers.

Quickstart:

  1. Create your own private repo and copy the files from this template repo into it. You'll want a private repo so that your huggingface keys are secure.

  2. Install the Banana Github App to your new repo.

  3. Log in to the Banana Dashboard and set up your account by saving your payment details and linking your Github.

  4. Create a huggingface account to get permission to download and run the Stable Diffusion text-to-image model.

    1. Edit the Dockerfile in your forked repo with ENV HF_AUTH_TOKEN=your_auth_token
    2. Push that repo to main.

From then onward, any pushes to the default repo branch (usually "main" or "master") trigger Banana to build and deploy your server, using the Dockerfile. Throughout the build we'll sprinkle in some secret sauce to make your server extra snappy 🔥

It'll then be deployed on our Serverless GPU cluster and callable with any of our server-side SDKs.

You can monitor buildtime and runtime logs by clicking the logs button in the model view on the Banana Dashboard.


Use Banana for scale.
