Sxrjs2010 / ComfyUI-DragNUWA


This is an implementation of DragNUWA for ComfyUI.

DragNUWA enables users to directly manipulate backgrounds or objects within images, and the model translates these actions into camera movements or object motions, generating the corresponding video.

Install

  1. Clone this repo into the custom_nodes directory of your ComfyUI installation

  2. Run pip install -r requirements.txt

  3. Download the DragNUWA weights drag_nuwa_svd.pth and place the file at ComfyUI/models/checkpoints/drag_nuwa_svd.pth

For Chinese users: drag_nuwa_svd.pth

A smaller and faster fp16 model, dragnuwa-svd-pruned.fp16.safetensors, is available from https://github.com/painebenjamin/app.enfugue.ai

For Chinese users: wget https://hf-mirror.com/benjamin-paine/dragnuwa-pruned-safetensors/resolve/main/dragnuwa-svd-pruned.fp16.safetensors (the file cannot be downloaded directly in a browser; alternatively, follow the official usage instructions at https://hf-mirror.com/)
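If you prefer to script the download of the pruned fp16 weights, here is a minimal sketch using huggingface_hub; the repo id and filename are taken from the mirror URL above, and the target directory assumes a standard ComfyUI layout.

```python
# Minimal sketch: fetch the pruned fp16 weights via huggingface_hub.
# Repo id and filename are inferred from the mirror URL above; adjust
# ckpt_dir to your ComfyUI installation. Users behind the mirror can
# set the HF_ENDPOINT environment variable to https://hf-mirror.com
# first, per the hf-mirror instructions referenced above.
import os
import shutil

from huggingface_hub import hf_hub_download

ckpt_dir = "ComfyUI/models/checkpoints"
os.makedirs(ckpt_dir, exist_ok=True)

cached = hf_hub_download(
    repo_id="benjamin-paine/dragnuwa-pruned-safetensors",
    filename="dragnuwa-svd-pruned.fp16.safetensors",
)
shutil.copy(cached, os.path.join(ckpt_dir, "dragnuwa-svd-pruned.fp16.safetensors"))
print("weights copied to", ckpt_dir)
```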

Nodes

The extension provides two nodes: Load CheckPoint DragNUWA and DragNUWA Run
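For orientation, ComfyUI custom nodes follow a small class convention (an INPUT_TYPES classmethod, RETURN_TYPES, FUNCTION, and a module-level NODE_CLASS_MAPPINGS dict). The sketch below only illustrates that shape with hypothetical inputs and type names; it is not this extension's actual node code.

```python
# Illustrative sketch of the ComfyUI custom-node convention only; the
# real classes in this repo may use different inputs, types, and names.
class LoadCheckPointDragNUWA:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"ckpt_name": ("STRING", {"default": "drag_nuwa_svd.pth"})}}

    RETURN_TYPES = ("DRAGNUWA_MODEL",)  # hypothetical custom type name
    FUNCTION = "load"
    CATEGORY = "DragNUWA"

    def load(self, ckpt_name):
        # Placeholder: the real node loads the weights from
        # ComfyUI/models/checkpoints/<ckpt_name> and returns the model.
        model = {"ckpt_name": ckpt_name}
        return (model,)


NODE_CLASS_MAPPINGS = {"Load CheckPoint DragNUWA": LoadCheckPointDragNUWA}
```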

Tools

Motion Traj Tool: generates motion trajectories
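Trajectories are sequences of image-space points. As a rough illustration (the exact format the node expects is not documented here), the sketch below resamples a few hand-drawn control points into one (x, y) point per output frame.

```python
# Hypothetical illustration: resample a hand-drawn trajectory to one
# (x, y) point per generated frame using linear interpolation.
import numpy as np


def resample_trajectory(points, num_frames):
    """points: list of (x, y) control points; returns an array of num_frames points."""
    pts = np.asarray(points, dtype=np.float32)
    t_src = np.linspace(0.0, 1.0, len(pts))
    t_dst = np.linspace(0.0, 1.0, num_frames)
    x = np.interp(t_dst, t_src, pts[:, 0])
    y = np.interp(t_dst, t_src, pts[:, 1])
    return np.stack([x, y], axis=1)  # shape (num_frames, 2)


traj = resample_trajectory([(100, 200), (180, 190), (260, 160)], num_frames=14)
```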

Examples

  1. base workflow

https://github.com/chaojie/ComfyUI-DragNUWA/blob/main/workflow.json

  2. auto trajectory video generation (work in progress)

One pipeline: video -> DWPose -> keypoints -> trajectory -> DragNUWA (drag-based pose control in the spirit of AnimateAnyone); a sketch of the keypoint-to-trajectory step follows below.
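A hedged sketch of that keypoints-to-trajectory step, assuming DWPose keypoints are available per frame as (x, y, confidence) triples; the keypoint index and output format are illustrative, not the actual node interface.

```python
# Illustrative only: turn per-frame DWPose keypoints into a drag
# trajectory by following a single keypoint index across frames.
import numpy as np


def keypoints_to_trajectory(frames_keypoints, keypoint_index, min_conf=0.3):
    """frames_keypoints: list of (N, 3) arrays of (x, y, confidence), one per frame."""
    traj = []
    last = None
    for kps in frames_keypoints:
        x, y, conf = kps[keypoint_index]
        if conf >= min_conf:
            last = (float(x), float(y))
        if last is not None:
            traj.append(last)  # hold the last confident position on dropouts
    return np.asarray(traj, dtype=np.float32)  # shape (num_frames_kept, 2)
```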

  3. optical flow workflow

Thanks to Fannovol16's Unimatch_OptFlowPreprocessor and to toyxyz's "load optical flow from directory" node.

https://github.com/chaojie/ComfyUI-DragNUWA/blob/main/workflow optical_flow.json
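As a rough illustration of how dense optical flow relates to drag trajectories, the sketch below follows a seed point through a sequence of flow fields by accumulating the flow vector at its current position; this is only a sketch of the general idea, not the workflow's actual node logic.

```python
# Illustrative only: track a seed point through dense optical-flow
# fields (each of shape (H, W, 2), in pixels) to build a trajectory.
import numpy as np


def flow_to_trajectory(flows, seed_xy):
    h, w = flows[0].shape[:2]
    x, y = map(float, seed_xy)
    traj = [(x, y)]
    for flow in flows:
        xi = int(np.clip(round(x), 0, w - 1))
        yi = int(np.clip(round(y), 0, h - 1))
        dx, dy = flow[yi, xi]  # flow vector at the current position
        x, y = x + float(dx), y + float(dy)
        traj.append((x, y))
    return np.asarray(traj, dtype=np.float32)  # shape (len(flows) + 1, 2)
```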

About

License: MIT License

