
MagicAvatar: Multimodal Avatar Generation and Animation

Jianfeng Zhang* · Hanshu Yan* · Zhongcong Xu* · Jiashi Feng · Jun Hao Liew†
ByteDance Inc.

Paper PDF · Project Page: https://magic-avatar.github.io/

Teaser video: teaser.mp4

Introducing MagicAvatar, a multi-modal framework that converts various input modalities (text, video, and audio) into motion signals, which are then used to generate and animate an avatar; a rough sketch of this two-stage idea follows below.

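As a rough illustration of the two-stage idea described above (modality-to-motion, then motion-to-avatar), here is a minimal Python sketch. Every class and method name in it is an illustrative assumption for exposition only, not an API provided by this project.

# Illustrative sketch only: the names below are hypothetical stand-ins for the
# two-stage pipeline described above (input modality -> motion signal -> avatar
# video), not code released by MagicAvatar.
from dataclasses import dataclass
from typing import List


@dataclass
class MotionSignal:
    """A sequence of per-frame conditioning maps (e.g., body-pose renderings)."""
    frames: List[bytes]  # placeholder for per-frame pose/conditioning images


class Multimodal2Motion:
    """Stage 1: map a text prompt, driving video, or audio clip to motion signals."""

    def from_text(self, prompt: str, num_frames: int = 16) -> MotionSignal:
        # a text-to-motion model would synthesize a pose sequence here
        return MotionSignal(frames=[b""] * num_frames)

    def from_video(self, video_path: str) -> MotionSignal:
        # a pose estimator would extract per-frame poses from the driving video
        return MotionSignal(frames=[b""])

    def from_audio(self, audio_path: str) -> MotionSignal:
        # an audio-to-motion model would predict gesture/dance poses here
        return MotionSignal(frames=[b""])


class Motion2Avatar:
    """Stage 2: render an avatar video conditioned on the motion signals and an
    appearance description (e.g., a text prompt)."""

    def generate(self, motion: MotionSignal, appearance_prompt: str) -> List[bytes]:
        # a motion-conditioned video generator would run here
        return [b"" for _ in motion.frames]


if __name__ == "__main__":
    stage1 = Multimodal2Motion()
    stage2 = Motion2Avatar()

    motion = stage1.from_text("a person doing jumping jacks")
    video = stage2.generate(motion, appearance_prompt="an astronaut on the moon")
    print(f"generated {len(video)} frames")

Here the same Motion2Avatar stage consumes motion signals regardless of whether they came from text, video, or audio, which mirrors how the framework decouples motion specification from avatar appearance.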
For more general video editing applications, please also check out our latest work, MagicEdit!

Citing

If you find our work useful, please consider citing:

@inproceedings{zhang2023magicavatar,
    author    = {Zhang, Jianfeng and Yan, Hanshu and Xu, Zhongcong and Feng, Jiashi and Liew, Jun Hao},
    title     = {MagicAvatar: Multi-modal Avatar Generation and Animation},
    booktitle = {arXiv},
    year      = {2023}
}

@inproceedings{liew2023magicedit,
    author    = {Liew, Jun Hao and Yan, Hanshu and Zhang, Jianfeng and Xu, Zhongcong and Feng, Jiashi},
    title     = {MagicEdit: High-Fidelity and Temporally Coherent Video Editing},
    booktitle = {arXiv},
    year      = {2023}
}


License

BSD 3-Clause "New" or "Revised" License