This is an extension for Stable-Diffusion-WebUI. Use it to make your videos smooth and say goodbye to flicker!
- Open "Extensions" tab.
- Open the "Install from URL" tab within it.
- Enter
https://github.com/Artiprocher/sd-webui-fastblend.git
into "URL for extension's git repository".
- Press the "Install" button.
- Wait for 5 seconds, and you will see the message "Installed into stable-diffusion-webui\extensions\sd-webui-fastblend. Use Installed tab to restart".
- Go to "Installed" tab, click "Check for updates", and then click "Apply and restart UI".
- Enjoy coherent, fluent videos!
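If you prefer the command line, the install can also be sketched as a manual clone into the extensions folder. This is a sketch, assuming the default stable-diffusion-webui directory layout; adjust `webui_root` to your own checkout:

```python
# Manual install sketch: clone the extension into webui's extensions folder.
# Assumes the default "extensions" layout of a local stable-diffusion-webui checkout.
import subprocess
from pathlib import Path

REPO_URL = "https://github.com/Artiprocher/sd-webui-fastblend.git"

def clone_command(webui_root: str) -> list[str]:
    """Build the git command that installs the extension manually."""
    target = Path(webui_root) / "extensions" / "sd-webui-fastblend"
    return ["git", "clone", REPO_URL, str(target)]

if __name__ == "__main__":
    # Run from your webui folder, then restart the UI to load the extension.
    subprocess.run(clone_command("."), check=True)
```

Restart the UI afterwards so the extension is picked up, just as with the "Install from URL" route.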
(Demo video: video_merge.mp4)
- The original video is here. We only use the first 236 frames.
- Re-render each frame independently. The parameters are:
- Prompt: masterpiece, best quality, anime screencap, cute, petite, long hair, black hair, blue eyes, hoodie, breasts, smile, short sleeves, hands, blue bowknot, wind, depth of field, forest, close-up,
- Negative prompt: (worst quality, low quality:1.4), monochrome, zombie, (interlocked fingers:1.2), extra arms,
- Steps: 20,
- Sampler: DPM++ 2M Karras,
- CFG scale: 7,
- Seed: 3010302656,
- Size: 768x512,
- Model hash: 4c79dd451a,
- Model: aingdiffusion_v90,
- Denoising strength: 1,
- Clip skip: 2,
- ControlNet 0: "Module: tile_resample, Model: control_v11f1e_sd15_tile [a371b31b], Weight: 0.4, Resize Mode: Crop and Resize, Low Vram: False, Threshold A: 1, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced",
- ControlNet 1: "Module: softedge_pidinet, Model: control_v11p_sd15_softedge [a8575a2a], Weight: 1, Resize Mode: Crop and Resize, Low Vram: False, Processor Res: 512, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced",
- ControlNet 2: "Module: depth_midas, Model: control_v11f1p_sd15_depth [cfd03158], Weight: 1, Resize Mode: Crop and Resize, Low Vram: False, Processor Res: 512, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced",
- Version: v1.6.0
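If you want to script this per-frame re-rendering instead of using the UI, the parameters above can be posted to the webui API. This is a sketch, assuming the server was launched with `--api` and that your local ControlNet version accepts the usual `alwayson_scripts` payload; the default address `http://127.0.0.1:7860` and the shortened prompts are placeholders:

```python
# Sketch: re-render one frame through the A1111 webui API (requires --api).
# Endpoint and field names follow the common A1111/ControlNet API conventions;
# verify them against your installed versions.
import json
import urllib.request

API_URL = "http://127.0.0.1:7860"  # default local webui address (assumption)

def controlnet_unit(module: str, model: str, weight: float) -> dict:
    """One ControlNet unit, mirroring the UI settings listed above."""
    return {
        "module": module,
        "model": model,
        "weight": weight,
        "pixel_perfect": True,
        "control_mode": "Balanced",
    }

def build_img2img_payload(frame_b64: str) -> dict:
    """Per-frame parameters from the list above (prompts shortened here)."""
    return {
        "init_images": [frame_b64],
        "prompt": "masterpiece, best quality, anime screencap, ...",  # full prompt above
        "negative_prompt": "(worst quality, low quality:1.4), ...",   # full negative prompt above
        "steps": 20,
        "sampler_name": "DPM++ 2M Karras",
        "cfg_scale": 7,
        "seed": 3010302656,
        "width": 768,
        "height": 512,
        "denoising_strength": 1,
        "alwayson_scripts": {"controlnet": {"args": [
            controlnet_unit("tile_resample", "control_v11f1e_sd15_tile [a371b31b]", 0.4),
            controlnet_unit("softedge_pidinet", "control_v11p_sd15_softedge [a8575a2a]", 1.0),
            controlnet_unit("depth_midas", "control_v11f1p_sd15_depth [cfd03158]", 1.0),
        ]}},
    }

def rerender_frame(frame_b64: str) -> str:
    """POST one base64-encoded frame to img2img and return the result image."""
    req = urllib.request.Request(
        API_URL + "/sdapi/v1/img2img",
        data=json.dumps(build_img2img_payload(frame_b64)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["images"][0]
```

Looping `rerender_frame` over the first 236 extracted frames reproduces the per-frame pass described here; a fixed seed keeps every frame on the same settings.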
- Open "FastBlend" tab. Upload the original video to "Guide video". Upload the re-rendered video to "Style video". We use the following settings:
- Inference mode: Fast mode
- Sliding window size: 30
- Patch size: 11
- Number of iterations: 6
- Guide weight: 100.0
- GPU ID: 0 (an Nvidia GPU is required)
- Post-process (contrast): 1.0
- Post-process (sharpness): 1.0
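As a rough intuition for the "Sliding window size" setting: FastBlend itself performs patch-based remapping and blending, but the windowing idea can be illustrated with a deliberately simplified numpy sketch in which each output frame is an average of the style frames inside its window. This is an illustration only, not the extension's algorithm:

```python
# Toy sketch of sliding-window blending (NOT FastBlend's actual patch-based
# algorithm): each output frame averages the style frames within a window
# centered on it, which smooths out frame-to-frame flicker.
import numpy as np

def sliding_window_blend(style_frames: np.ndarray, window: int = 30) -> np.ndarray:
    """style_frames: (T, H, W, C) float array; returns blended frames."""
    num_frames = style_frames.shape[0]
    half = window // 2
    out = np.empty_like(style_frames)
    for t in range(num_frames):
        lo, hi = max(0, t - half), min(num_frames, t + half + 1)
        out[t] = style_frames[lo:hi].mean(axis=0)  # blend frames in the window
    return out
```

A larger window blends information across more frames (smoother but slower), which is why the window size and the number of iterations dominate the runtime.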
- Click "Run" and wait. (I tested this extension on an Nvidia RTX 3060 laptop; it took about half an hour.)
- Now you have a smooth video. Go to the "Extras" tab to upscale it with "R-ESRGAN 4x+ Anime6B".
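The upscaling step can likewise be scripted frame by frame through the webui API. This is a sketch, assuming `--api` is enabled; the endpoint and field names follow the A1111 "extras" API, so treat them as assumptions about your local version:

```python
# Sketch: upscale one frame via the A1111 "extras" API (requires --api).
# Field names are the commonly documented ones; verify against your version.
import json
import urllib.request

API_URL = "http://127.0.0.1:7860"  # default local webui address (assumption)

def build_upscale_payload(frame_b64: str, scale: float = 2.0) -> dict:
    """Mirror the Extras-tab settings: the anime R-ESRGAN upscaler."""
    return {
        "image": frame_b64,
        "upscaling_resize": scale,
        "upscaler_1": "R-ESRGAN 4x+ Anime6B",
    }

def upscale_frame(frame_b64: str, scale: float = 2.0) -> str:
    """POST one base64-encoded frame and return the upscaled image."""
    req = urllib.request.Request(
        API_URL + "/sdapi/v1/extra-single-image",
        data=json.dumps(build_upscale_payload(frame_b64, scale)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["image"]
```

Upscaling the blended frames one by one and re-encoding them with your usual video tool gives the same result as the Extras tab.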