chiaSWARM

Distributed GPU compute, or "All these GPUs are idle now. Let's use 'em for something other than PoW!"

Introduction

The chiaSWARM is a distributed network of GPU nodes that run AI and ML workloads on behalf of users who may not have the requisite hardware.

GPU nodes are paid in XCH.

This is NOT Proof of Work on Chia.
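
To make the idea concrete, here is a purely illustrative sketch of what a job submitted to the swarm might contain. The field names, values, and structure are hypothetical and do not reflect chiaSWARM's actual wire protocol.

```python
# Hypothetical job request; field names are illustrative only,
# not chiaSWARM's actual protocol.
import json

job = {
    "workload": "txt2img",              # e.g. Stable Diffusion text-to-image
    "parameters": {
        "prompt": "a lighthouse at dusk, oil painting",
        "num_images": 1,
    },
    "payment_address": "xch1...",       # placeholder; node operators are paid in XCH
}
print(json.dumps(job, indent=2))
```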

Workloads

Stable Diffusion

The first supported workloads are various types of Stable Diffusion image generation and manipulation.

Open an issue to gain access and give it a try on the swarm network!
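
For a sense of what a node does when it picks up a txt2img job, here is a minimal sketch using Hugging Face diffusers. The checkpoint and generation parameters are illustrative assumptions, not necessarily what the swarm actually runs.

```python
# Minimal txt2img sketch with diffusers; model and settings are illustrative.
import torch
from diffusers import StableDiffusionPipeline

# Assumption: a Stable Diffusion 2.1 checkpoint on a CUDA GPU in half precision.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
).to("cuda")

# Run the prompt and save the first generated image.
image = pipe(
    "a photo of an astronaut riding a horse on mars",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("result.png")
```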

Roadmap

  • ✓ Networking and core protocol
  • ✓ Basic stable diffusion workloads (txt2image, img2img, various models)
  • ✓ Image upscale, inpainting, and stable diffusion 2.1
  • ✓ Docker
  • ✓ More stable diffusion workloads (other interesting models & ongoing version bumps)
  • ✓ XCH integration
  • GPT workloads
  • Real-ESRGAN image upscaling and face fixing
  • Whatever else catches our fancy

Suggestions, issues and pull requests welcome.

Becoming the SWARM

To be a swarm node, you need a CUDA-capable NVIDIA GPU with at least 8 GB of VRAM; a 30XX-series card or better is recommended.
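
Before installing, a quick way to confirm your GPU meets that rough requirement is a short PyTorch check. This assumes PyTorch with CUDA support is already installed; it is not part of the chiaSWARM installer.

```python
# Sanity-check that a CUDA GPU with roughly 8 GB of VRAM is visible.
import torch

if not torch.cuda.is_available():
    raise SystemExit("No CUDA-capable GPU detected.")

props = torch.cuda.get_device_properties(0)
vram_gb = props.total_memory / (1024 ** 3)
print(f"GPU: {props.name}, VRAM: {vram_gb:.1f} GB")
if vram_gb < 8:
    print("Warning: less than 8 GB of VRAM; swarm workloads may fail.")
```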

Follow the installation instructions to get started.


License

Apache License 2.0
