nanobeep / Dreambooth-brian6091

Dreambooth-style fine-tuning of Stable Diffusion models

Link to notebook for classic Dreambooth:

Train In Colab

If you're looking to fine-tune using Low-rank Adaptation (LoRA), you can find a notebook on this branch, or follow this link:

Train In Colab

Tested with Tesla T4 and A100 GPUs on Google Colab.

Tested with Stable Diffusion v1-5 and Stable Diffusion v2-base.

There are lots of notebooks for Dreambooth-style training. This one borrows elements from ShivamShrirao's implementation, but is distinguished by some additional features:

  • based on the Hugging Face Diffusers🧨 implementation, so it's easy to stay up-to-date
  • exposes lesser-explored parameters for experimentation (Adam optimizer parameters, the cosine_with_restarts learning rate scheduler, etc.), all of which are dumped to a JSON file so you can remember what you did (first sketch below)
  • possibility of dropping some text-conditioning to improve classifier-free guidance sampling (e.g., as was done when fine-tuning SD v1-5; second sketch below)
  • training loss and prior class loss are tracked separately and can be visualized using tensorboard (third sketch below)
  • option to generate exponentially-weighted moving average (EMA) weights for the unet (fourth sketch below)
  • easily switch in different variational autoencoders (VAE) or text encoders
  • inference with trained models is done using Diffusers🧨 pipelines and does not rely on any web apps (final sketch below)
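
A minimal sketch of the parameter dump, assuming the notebook serializes its parsed training arguments; the file name training_args.json and the args namespace here are illustrative, not the repository's actual names:

```python
import argparse
import json

# Illustrative stand-in for the notebook's parsed training arguments.
args = argparse.Namespace(learning_rate=5e-6, lr_scheduler="cosine_with_restarts")

# Write the run's settings next to the checkpoints so they can be recovered later.
with open("training_args.json", "w") as f:
    json.dump(vars(args), f, indent=2)
```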
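
The text-conditioning dropout amounts to occasionally training against the empty prompt, so the model also learns the unconditional distribution that classifier-free guidance samples from. A minimal sketch, with illustrative names (maybe_drop_conditioning, drop_probability) rather than the notebook's actual identifiers:

```python
import torch

def maybe_drop_conditioning(text_embeddings, empty_prompt_embeddings, drop_probability=0.1):
    # With probability `drop_probability`, substitute the embeddings of the
    # empty prompt "" for this batch's text conditioning.
    if torch.rand(1).item() < drop_probability:
        return empty_prompt_embeddings
    return text_embeddings
```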
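
Tracking the two losses separately is a matter of logging them under different tags; a minimal tensorboard sketch, with stand-in values where the training loop would supply the real ones:

```python
import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="logs")

# Stand-ins for the two loss terms computed each training step.
instance_loss = torch.tensor(0.12)
prior_loss = torch.tensor(0.05)
global_step = 0

# Separate tags make tensorboard draw separate curves for the two losses.
writer.add_scalar("loss/instance", instance_loss.item(), global_step)
writer.add_scalar("loss/prior", prior_loss.item(), global_step)
writer.close()
```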
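
EMA weights are a slowly-moving average of the unet's parameters, updated once per step. A plain-PyTorch sketch (the notebook may use a library helper instead; update_ema and the toy module below are illustrative):

```python
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def update_ema(ema_model, model, decay=0.9999):
    # One EMA step: ema <- decay * ema + (1 - decay) * current weights.
    for ema_p, p in zip(ema_model.parameters(), model.parameters()):
        ema_p.mul_(decay).add_(p, alpha=1.0 - decay)

# Toy usage with a stand-in module; in training this would be the unet.
model = nn.Linear(4, 4)
ema_model = copy.deepcopy(model).requires_grad_(False)
update_ema(ema_model, model)
```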
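
Swapping in a different VAE and running inference both go through the Diffusers🧨 pipeline API. A sketch, where "path/to/trained-model" stands in for a training run's output directory, and "stabilityai/sd-vae-ft-mse" is one publicly available VAE, not a repository requirement:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Load an alternative VAE to use in place of the model's default.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

# Overriding the `vae` component when building the pipeline swaps it in.
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/trained-model",  # stand-in for a fine-tuned model directory
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of sks dog in a bucket").images[0]
image.save("sample.png")
```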

Buy Me A Coffee

License: Apache License 2.0

Languages: Python 53.9%, Jupyter Notebook 46.1%