Diffusion UI is a web frontend for generating images with diffusion models.
Its goal is to provide an interface to online and offline backends that perform image generation and inpainting, such as Stable Diffusion.
The documentation is available here
Diffusion UI features:
- Text-to-image
- Image-to-image:
  - from an uploaded image
  - from a drawing made on the interface
- Inpainting:
  - including the possibility to draw inside an inpainting region
- Modular support for different backends:
  - a basic Stable Diffusion backend
  - the full-featured automatic1111 fork
  - the free online Stable Horde
- Modification of model parameters in the left tab
- Image gallery of previously generated images in the right tab
- Variations and inpainting edits of previously generated images
- Sharing of the backend on your PC so you can use it from your smartphone or tablet
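The modular backend support listed above can be pictured as a common interface that every backend implements, so the rest of the application never depends on a specific service. The sketch below illustrates that idea only; it is not the frontend's actual code (which is JavaScript), and all class and method names here are hypothetical.

```python
from abc import ABC, abstractmethod

# Hypothetical common interface: each backend turns a prompt
# plus generation parameters into a list of image payloads.
class Backend(ABC):
    @abstractmethod
    def txt2img(self, prompt: str, steps: int = 20) -> list[bytes]:
        ...

class LocalBackend(Backend):
    """Stub standing in for a local Stable Diffusion backend."""
    def txt2img(self, prompt: str, steps: int = 20) -> list[bytes]:
        # A real implementation would run or call a local model here.
        return [f"local:{prompt}:{steps}".encode()]

class StableHordeBackend(Backend):
    """Stub standing in for the online Stable Horde backend."""
    def txt2img(self, prompt: str, steps: int = 20) -> list[bytes]:
        # A real implementation would call the Stable Horde API here.
        return [f"horde:{prompt}:{steps}".encode()]

def generate(backend: Backend, prompt: str) -> list[bytes]:
    # The UI only talks to the abstract interface, so backends
    # can be swapped without changing any interface code.
    return backend.txt2img(prompt)
```

Because the caller only sees `Backend`, adding a new service means writing one new class rather than touching the rest of the application.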
The frontend is available at diffusionui.com (note: you still need a local backend to make it work with Stable Diffusion).
Alternatively, you can run it locally.
To install the Stable Diffusion backend, follow the instructions in the docs
To use the automatic1111 fork of Stable Diffusion from your own PC, follow the instructions here
If you can't run it locally, you can also use the automatic1111 fork of Stable Diffusion with diffusion-ui online for free via this Google Colab notebook
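For reference, the automatic1111 web UI exposes an HTTP API when started with the `--api` flag, which is what a frontend like this talks to. The sketch below builds a minimal `txt2img` request against that API; the endpoint and payload fields come from the automatic1111 API, but the default host/port and the helper names are assumptions for illustration.

```python
import base64
import json
import urllib.request

def build_txt2img_request(prompt: str, steps: int = 20,
                          width: int = 512, height: int = 512,
                          base_url: str = "http://127.0.0.1:7860") -> urllib.request.Request:
    # /sdapi/v1/txt2img is the text-to-image endpoint of the automatic1111
    # API (the web UI must be launched with --api for it to exist).
    payload = {"prompt": prompt, "steps": steps,
               "width": width, "height": height}
    return urllib.request.Request(
        f"{base_url}/sdapi/v1/txt2img",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def decode_images(response_body: bytes) -> list[bytes]:
    # The API responds with {"images": [<base64-encoded PNG>, ...], ...}.
    return [base64.b64decode(img) for img in json.loads(response_body)["images"]]
```

Against a running web UI you would send the request with `urllib.request.urlopen(...)` and pass the response body to `decode_images` to get the PNG bytes.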
To generate images for free using the Stable Horde, follow the instructions here
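As background, the Stable Horde is a crowdsourced cluster reached over a REST API: you submit an asynchronous generation job, then poll for its result. The sketch below builds such a submission request. The endpoint and the anonymous API key are taken from the Stable Horde API, but the parameter choices and helper name are illustrative assumptions.

```python
import json
import urllib.request

# Stable Horde's anonymous API key (works without registration,
# but jobs run at the lowest priority).
ANON_KEY = "0000000000"

def build_horde_request(prompt: str, api_key: str = ANON_KEY,
                        steps: int = 20) -> urllib.request.Request:
    # POST /v2/generate/async submits a job; the returned job id is then
    # polled on the check/status endpoints until the images are ready.
    payload = {"prompt": prompt,
               "params": {"steps": steps, "width": 512, "height": 512}}
    return urllib.request.Request(
        "https://stablehorde.net/api/v2/generate/async",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "apikey": api_key},
        method="POST",
    )
```

Registering for your own API key gives your jobs higher priority than the anonymous key.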
MIT License for the code here.
CreativeML Open RAIL-M license for the Stable Diffusion model.