While making this I figured out that I could simply extract the lora and apply it to the v3 motion model, so it can be used as-is with any AnimateDiff-Evolved workflow. The merged v3 checkpoint, along with the spatial lora converted to .safetensors, is available here:
https://huggingface.co/Kijai/MagicTime-merged-fp16
Using those files does NOT require this repo, and I will not be updating this further.
Example output: magictime_example.mp4
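If you would rather grab the merged checkpoint and spatial lora from that HuggingFace repo on the command line, one option is the huggingface-cli tool from the huggingface_hub package (the target folder below is only a suggestion, place the files wherever your AnimateDiff-Evolved setup looks for them):

pip install huggingface_hub
huggingface-cli download Kijai/MagicTime-merged-fp16 --local-dir ComfyUI/models/animatediff_models/MagicTime-merged-fp16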
Either use the ComfyUI Manager and its install-from-git feature, or clone this repo into custom_nodes and run:
pip install -r requirements.txt
or, if you use the portable build, run this in the ComfyUI_windows_portable folder:
python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\ComfyUI-MagicTimeWrapper\requirements.txt
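Spelled out, a manual (non-portable) install could look like this (the GitHub URL is inferred from the node name, adjust it if yours differs):

cd ComfyUI/custom_nodes
git clone https://github.com/kijai/ComfyUI-MagicTimeWrapper
pip install -r ComfyUI-MagicTimeWrapper/requirements.txt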
You can use any SD 1.5 model, and the v3 AnimateDiff motion model, placed in ComfyUI/models/animatediff_models:
https://huggingface.co/guoyww/animatediff/blob/main/v3_sd15_mm.ckpt

The rest of the required weights (131.0 MB) are auto-downloaded from https://huggingface.co/BestWishYsh/MagicTime/tree/main/Magic_Weights to ComfyUI/models/magictime
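If you prefer to fetch the motion model from the command line, swapping blob for resolve in the HuggingFace URL gives the direct file link; a sketch assuming wget is available:

wget -P ComfyUI/models/animatediff_models https://huggingface.co/guoyww/animatediff/resolve/main/v3_sd15_mm.ckpt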
Original repo: https://github.com/PKU-YuanGroup/MagicTime
ChronoMagic: a dataset of 2265 metamorphic time-lapse videos, each accompanied by a detailed caption. The subset of ChronoMagic used to train MagicTime has been released; it can be downloaded from Google Drive, and some samples can be found on the Project Page.
- AnimateDiff: the codebase we built upon; a strong U-Net-based text-to-video generation model.
- Open-Sora-Plan: the codebase we built upon; a simple and scalable DiT-based text-to-video generation repo aiming to reproduce Sora.
- The majority of this project is released under the Apache 2.0 license as found in the LICENSE file.
- The service is a research preview intended for non-commercial use only. Please contact us if you find any potential violations.
If you find our paper and code useful in your research, please consider giving a star ⭐ and citation 📝.
@misc{yuan2024magictime,
    title={MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators},
    author={Shenghai Yuan and Jinfa Huang and Yujun Shi and Yongqi Xu and Ruijie Zhu and Bin Lin and Xinhua Cheng and Li Yuan and Jiebo Luo},
    year={2024},
    eprint={2404.05014},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}