taichuai / AnimateAnyone-unofficial

Unofficial Implementation of Animate Anyone


If you find this repository helpful, please consider giving us a star⭐!

Overview

This repository contains a simple, unofficial implementation of Animate Anyone. The project is built upon magic-animate and AnimateDiff.

News 🤗🤗🤗

The basic test of the first training stage has passed; the second stage is currently being trained and tested.

Training may be slow due to GPU shortage.😢

The weights will be released within a few days.😄

Sample of Stage 1 Result on UBC-fashion dataset

Special thanks to Zhenzhi Wang for assistance with code development and training. In the current version, faces still show some artifacts. Note also that this model was trained on the UBC-fashion dataset rather than a large-scale dataset.

Note !!!

This project is developed part-time and is under continuous development, so the code may contain bugs. Corrections are welcome, and I will optimize the code after the pre-trained weights are released!

In the current version, we recommend training on 8 or 16 A100 or H100 GPUs (80 GB) at 512 or 768 resolution. Low resolutions (256, 384) do not give good results, because the VAE reconstructs poorly at low resolution.
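As a rough illustration of why low resolutions struggle, the sketch below computes latent sizes assuming a standard Stable-Diffusion-style VAE (8× spatial downsampling, 4 latent channels). The function name and defaults are hypothetical and not taken from this repository's code:

```python
# Sketch: latent spatial size under a Stable-Diffusion-style VAE
# (assumes 8x downsampling and 4 latent channels; illustrative
# values, not read from this repository's configs).
def latent_shape(height, width, channels=4, vae_factor=8):
    return (channels, height // vae_factor, width // vae_factor)

# At 512x512 the latent is 64x64; at 256x256 it is only 32x32,
# leaving far less spatial detail for the VAE decoder to recover.
```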

ToDo

  • Release Training Code.
  • Release Inference Code.
  • Release Unofficial Pre-trained Weights. (Note: trained on public datasets rather than large-scale private datasets, for academic research only.🤗)
  • Release Gradio Demo.

Requirements

bash fast_env.sh

🎬 Gradio Demo (will be published with the weights)

python3 -m demo.gradio_animate

If you only have a GPU with 24 GB of VRAM, I recommend running inference at a resolution of 512 or below.

Training

First Stage

torchrun --nnodes=8 --nproc_per_node=8 train.py --config configs/training/train_stage_1.yaml

Second Stage

torchrun --nnodes=8 --nproc_per_node=8 train.py --config configs/training/train_stage_2.yaml
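The torchrun commands above launch 8 processes on each of 8 nodes (64 workers total). Each worker can read its rank from the environment variables that torchrun sets; `RANK`, `WORLD_SIZE`, and `LOCAL_RANK` are standard torchrun behavior, while the helper function below is a hypothetical sketch, not code from this repository:

```python
import os

# torchrun sets RANK, WORLD_SIZE, and LOCAL_RANK for every worker;
# a training script typically reads them to set up distributed state.
def distributed_info(env=None):
    env = os.environ if env is None else env
    rank = int(env.get("RANK", 0))              # global rank, 0..WORLD_SIZE-1
    world_size = int(env.get("WORLD_SIZE", 1))  # nnodes * nproc_per_node
    local_rank = int(env.get("LOCAL_RANK", 0))  # GPU index on this node
    return rank, world_size, local_rank
```

With `--nnodes=8 --nproc_per_node=8`, `WORLD_SIZE` would be 64.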

Acknowledgements

Special thanks to the original authors of the Animate Anyone project and to the contributors to the magic-animate and AnimateDiff repositories for the open research and foundational work that inspired this unofficial implementation.

Email

guoqin@stu.pku.edu.cn

My responses may be slow; please keep questions relevant.
