There are 31 repositories under the huggingface-diffusers topic.
Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
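Morphing between prompts is commonly done by spherically interpolating (slerp) the initial noise latents, so intermediate frames stay on the Gaussian shell the model expects. A minimal NumPy sketch of the idea — the `slerp` helper and the 16-dimensional vectors are illustrative, not code from the repo above:

```python
import numpy as np

def slerp(t, v0, v1):
    """Spherical linear interpolation between two latent vectors.

    The angle is measured between the normalized vectors; the
    interpolation itself mixes the original (unnormalized) latents.
    """
    v0_n = v0 / np.linalg.norm(v0)
    v1_n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0_n, v1_n), -1.0, 1.0)
    theta = np.arccos(dot)
    if np.isclose(theta, 0.0):
        return v0  # vectors are (nearly) parallel; nothing to rotate
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Two random starting latents standing in for the noise of two prompts.
rng = np.random.default_rng(0)
a = rng.standard_normal(16)
b = rng.standard_normal(16)

# Five interpolation steps from a to b; each would seed one video frame.
frames = [slerp(t, a, b) for t in np.linspace(0.0, 1.0, 5)]
```

In a real pipeline each interpolated latent would be passed to the diffusion model as the starting noise for one frame.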
Inpaint Anything extension performs stable diffusion inpainting on a browser UI using masks from Segment Anything.
Colab notebook for Stable Diffusion Hyper-SDXL.
Inpaint Anything performs stable diffusion inpainting on a browser UI using masks from Segment Anything.
Dreambooth implementation based on Stable Diffusion with minimal code.
🤗 Unofficial huggingface/diffusers-based implementation of the paper "Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis".
A simple web application that lets you replace any part of an image with an image generated based on your description.
A fine-tuned model based on Stable Diffusion to create images in the style of Midjourney
Quantized Stable Diffusion, cutting memory use by 75%; tested in Streamlit and deployed in a container
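The 75% figure follows directly from the storage sizes: quantizing float32 weights (4 bytes each) to int8 (1 byte each) keeps a quarter of the memory. A small NumPy sketch of symmetric int8 quantization — the weight tensor here is a hypothetical stand-in for a model layer, not data from the repo:

```python
import numpy as np

# Hypothetical weight tensor standing in for one model layer.
weights = np.random.default_rng(0).standard_normal((256, 256)).astype(np.float32)

# Symmetric int8 quantization: one scale per tensor, derived from the
# largest absolute weight so the range maps onto [-127, 127].
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)

# int8 storage is a quarter of float32: a 75% memory reduction.
saving = 1 - q.nbytes / weights.nbytes

# Dequantize to confirm the reconstruction error stays within one
# quantization step.
deq = q.astype(np.float32) * scale
err = np.abs(weights - deq).max()
```

Per-channel scales and activation quantization reduce the error further, but the memory arithmetic is the same.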
🤗 HuggingFace Diffusers Flax TPU and PyTorch GPU for Colab
Dreambooth for Colab
Morpheus is an open-source platform for generating artwork using image editing and Stable Diffusion models.
Diffusers API in OCaml
Collection of OSS models that are containerized into a serving container
Toolchain for creating custom datasets and training Stable Diffusion (1.x, 2.x, XL) models and LoRAs
Implementation of Paint-with-words with Stable Diffusion using the diffusers pipeline: a method from eDiff-I that lets you generate an image from a text-labeled segmentation map.
Using SageMaker and LoRA to fine-tune the Stable Diffusion model and generate fashion images
This repo provides some Stable Diffusion experiments on the textual inversion and captioning tasks
diffusion model for unconditional image generation of Bored Apes
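Unconditional diffusion models like this one are trained by corrupting images with the DDPM forward process, x_t = sqrt(ᾱ_t)·x_0 + sqrt(1-ᾱ_t)·ε, and learning to predict the noise ε. A minimal NumPy sketch of the forward (noising) step under that standard formulation — the shapes and schedule value are illustrative only:

```python
import numpy as np

def add_noise(x0, alpha_bar, rng):
    """DDPM forward process: x_t = sqrt(alpha_bar)*x0 + sqrt(1-alpha_bar)*eps.

    alpha_bar is the cumulative noise-schedule product at timestep t;
    alpha_bar = 1 means no noise, alpha_bar -> 0 means pure noise.
    """
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

# A hypothetical 8x8 "image" noised halfway through the schedule.
rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))
x_noisy = add_noise(x0, alpha_bar=0.5, rng=rng)
```

During training the model sees `x_noisy` plus the timestep and regresses the injected `eps`; at sampling time the learned reverse process runs the chain backwards from pure noise.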
Easily create your own AI avatar images!
Configuration files for building E621-Rising v3 SDXL model and dataset
Experimental Stable Diffusion XL Webui
a tool to create synthetic image data
Generating images with diffusion models on a mobile device, with an intranet GPU box as backend
Uses Stable Diffusion with Hugging Face diffusers: starting from one input image, elements described by text prompts are added with the diffusion algorithm, and the process is iterated three or four times with different kinds of elements.
This is an adaptation of the notebook, which is provided as part of a class by Hugging Face
A web app that allows you to select a subject and then change its background, OR keep the background and change the subject.
My own implementation of Stable Diffusion for me to generate reference art
Generate images with a text prompt.
Generates images from input text.
Generate 3D assets with a text prompt or an image.