AnimateDiff for ComfyUI

Improved AnimateDiff integration for ComfyUI, initially adapted from sd-webui-animatediff but changed greatly since then. Please read the AnimateDiff repo README for more information about how it works at its core.

Examples shown here will also often make use of two helpful sets of nodes:

  • ComfyUI-Advanced-ControlNet for loading files in batches and controlling which latents should be affected by the ControlNet inputs (work in progress; more advanced workflows and features for AnimateDiff usage will be added later).
  • comfy_controlnet_preprocessors for ControlNet preprocessors not present in vanilla ComfyUI. That repo is archived, and future development by its dev will happen in comfyui_controlnet_aux; while most preprocessors are common between the two, some give different results. The workflows linked here use the archived version, comfy_controlnet_preprocessors. (TODO: I'll reinvestigate with more recent changes and update as needed)

Installation

If using Comfy Manager:

  1. Look for AnimateDiff and be sure it is the Kosinkadink version. Install it.

If installing manually:

  1. Clone this repo into the custom_nodes folder, as shown below.
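
As a sketch, assuming ComfyUI is installed in the current directory and using the upstream repo URL, the manual install looks like this:

    cd ComfyUI/custom_nodes
    git clone https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved.git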

How to Use:

  1. Download motion modules. You will need at least one; different modules produce different results.
  2. Place models in ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/models. They can be renamed if you want (a download sketch follows this list).
  3. Get creative! If it works for normal image generation, it (probably) will work for AnimateDiff generations. Latent upscales? Go for it. ControlNets, one or more stacked? You betcha. Masking the conditioning of ControlNets to only affect part of the animation? Sure. Try stuff and you will be surprised by what you can do. Samples with workflows are included below.
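
As a hedged sketch of steps 1-2, the snippet below fetches a motion module into the models folder with plain Python; the HuggingFace URL is my assumption, so substitute the link for whichever module you actually want:

    import os
    import urllib.request

    # Assumed download location for a motion module; swap in your own link.
    url = "https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15.ckpt"
    dest_dir = "ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/models"

    os.makedirs(dest_dir, exist_ok=True)
    urllib.request.urlretrieve(url, os.path.join(dest_dir, os.path.basename(url)))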

Features:

  • Compatible with a variety of samplers, including vanilla KSampler nodes and KSampler (Efficient) nodes.
  • ControlNet support - both per-frame and "interpolating" between frames; you can kind of use this as img2video (see workflows below)
  • Infinite animation length support using sliding context windows (introduced 9/17/23)

Upcoming features:

  • Prompt travel, and in general more control over per-frame conditioning
  • Alternate context schedulers and context types

Core Nodes:

AnimateDiff Loader


The only required node to use AnimateDiff, the Loader outputs a model that will perform AnimateDiff functionality when passed into a sampling node.

Inputs:

  • model: the model to set up for AnimateDiff usage. Must be an SD1.5-derived model.
  • context_options: optional context window to use while sampling; if passed in, total animation length has no limit. If not passed in, animation length will be limited to either 24 or 32 frames, depending on the motion model.
  • model_name: motion model to use with AnimateDiff.
  • beta_schedule: noise scheduler for SD. sqrt_linear is the intended way to use AnimateDiff, with expected saturation. However, linear can give useful results as well, so feel free to experiment.

Outputs:

  • MODEL: the input model, injected to perform AnimateDiff functions (an illustrative node sketch follows).
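
For those curious how such a node is wired up, here is a minimal illustrative sketch of a ComfyUI node with the Loader's interface. The class name, helper function, and option strings are my assumptions, not the repo's actual implementation:

    import os

    MODELS_DIR = os.path.join(os.path.dirname(__file__), "models")

    def get_motion_model_names():
        # Hypothetical helper: list motion-module checkpoints in models/.
        return sorted(f for f in os.listdir(MODELS_DIR)
                      if f.endswith((".ckpt", ".safetensors")))

    class AnimateDiffLoaderSketch:  # hypothetical name, for illustration only
        @classmethod
        def INPUT_TYPES(cls):
            return {
                "required": {
                    "model": ("MODEL",),                        # SD1.5-derived model
                    "model_name": (get_motion_model_names(),),  # motion module to load
                    "beta_schedule": (["sqrt_linear", "linear"],),
                },
                "optional": {
                    "context_options": ("CONTEXT_OPTIONS",),    # enables sliding windows
                },
            }

        RETURN_TYPES = ("MODEL",)
        FUNCTION = "load"
        CATEGORY = "Animate Diff"

        def load(self, model, model_name, beta_schedule, context_options=None):
            # The real node injects motion modules into the model's UNet,
            # applies the chosen beta schedule, and attaches context options.
            raise NotImplementedError("sketch only; see the repo for the real node")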

Usage

To use it, just plug your model into the AnimateDiff Loader. When the output model (and any derivative of it in this pathway) is passed into a sampling node, AnimateDiff will do its thing.

The desired animation length is determined by the number of latents passed into the sampler. With context_options connected, there is no limit to the number of latents you can pass in, AKA unlimited animation length. When no context_options are connected, the sweet spot is 16 latents for best results, with a hard limit of 24 or 32 depending on the loaded motion model. These same rules apply to Uniform Context Options' context_length (see the sliding-window sketch below).
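
To make the sliding-context idea concrete, here is a minimal sketch of uniform overlapping windows over a longer latent batch; the overlap value and scheduling here are illustrative assumptions, not the repo's exact algorithm:

    def uniform_windows(num_frames: int, context_length: int = 16, overlap: int = 4):
        # Split num_frames into overlapping windows so each sampling step
        # only ever sees context_length latents at once.
        if num_frames <= context_length:
            return [list(range(num_frames))]
        step = context_length - overlap
        windows = []
        for start in range(0, num_frames - overlap, step):
            end = min(start + context_length, num_frames)
            windows.append(list(range(end - context_length, end)))
            if end == num_frames:
                break
        return windows

    # A 48-frame animation sampled through 16-frame windows:
    for w in uniform_windows(48):
        print(w[0], "->", w[-1])   # 0->15, 12->27, 24->39, 32->47

The overlapping frames are what keep adjacent windows looking consistent with each other.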

Uniform Context Options

TODO: fill this out

Samples (download or drag images of the workflows into ComfyUI to instantly load the corresponding workflows!)

txt2img

(workflow image: t2i_wf)

(sample outputs: aaa_readme_00001_, aaa_readme_00003_.webm)

txt2img - 48 frame animation with 16 context_length (uniform)

(workflow image: t2i_context_wf)

(sample outputs: aaa_readme_00017_, aaa_readme_00018_.webm)

I haven't had the chance to rerun the rest of these workflows - will update this in a few hours.

txt2img w/ latent upscale (partial denoise on upscale)

(workflow image: txt2image_upscale_partialdenoise_workflow)

(sample output: AA_upscale_gif_00007_)

txt2img w/ latent upscale (partial denoise on upscale) - 48 frame animation with 16 frame window

(workflow image: txt2image_sliding_upscale_partialdenoise_workflow)

TODO: add generated image here (gif is too big for github)

txt2img w/ latent upscale (full denoise on upscale)

(workflow image: txt2image_upscale_workflow)

(sample output: AA_upscale_gif_00001_)

txt2img w/ ControlNet-stabilized latent-upscale (partial denoise on upscale, Scaled Soft ControlNet Weights)

(workflow image: txt2image_upscale_controlnetsoftweights_partialdenoise_workflow)

(sample output: AA_upscale_gif_00009_)

txt2img w/ ControlNet-stabilized latent-upscale (full denoise on upscale)

(workflow image: txt2image_upscale_controlnet_workflow)

(sample output: AA_upscale_controlnet_gif_00006_)

txt2img w/ Initial ControlNet input (using LineArt preprocessor on first txt2img as an example)

(workflow image: txt2image_controlnet_workflow)

(sample output: AA_controlnet_gif_00017_)

txt2img w/ Initial ControlNet input (using OpenPose images) + latent upscale w/ full denoise

(workflow image: txt2image_openpose_controlnet_upscale_workflow)

(open_pose images provided courtesy of toyxyz)

(sample outputs: AA_openpose_cn_gif_00001_, AA_gif_00029_)

img2img (TODO: this is outdated and still shows the old flickering version, update this)

(workflow screenshot: Screenshot 2023-07-22 at 22 08 00)

(sample output: AnimateDiff_00002)

Known Issues

Some motion models have a visible watermark on resulting images (especially when using mm_sd_v15)

Training data used by the authors of the AnimateDiff paper contained Shutterstock watermarks. Since mm_sd_v15 was finetuned on finer, less drastic movement, the motion module attempts to replicate the transparency of that watermark, and it does not get blurred away as it does with mm_sd_v14. Using other motion modules, or combining them via Advanced KSamplers, should alleviate watermark issues.
