AnimeCeleb — Official Dataset & PyTorch Implementation

***** New: Follow-up research by our team is available at https://github.com/kangyeolk/Paint-by-Sketch *****

Teaser image

AnimeCeleb: Large-Scale Animation CelebHeads Dataset for Head Reenactment
Kangyeol Kim*1,4, Sunghyun Park*1, Jaeseong Lee*1, Sunghyo Chung2, Junsoo Lee3, Jaegul Choo1
1KAIST, 2Korea University, 3Naver Webtoon, 4Letsur Inc.
In ECCV 2022. (* indicates equal contribution)

Paper: https://arxiv.org/abs/2111.07640

Abstract: We present a novel Animation CelebHeads dataset (AnimeCeleb) to address animation head reenactment. Unlike previous animation head datasets, we utilize 3D animation models as controllable image samplers, which can provide a large number of head images with corresponding detailed pose annotations. To facilitate the data creation process, we build a semi-automatic pipeline that leverages open 3D computer graphics software together with a developed annotation system. After training with AnimeCeleb, recent head reenactment models produce high-quality animation head reenactment results, which are not achievable with existing datasets. Furthermore, motivated by metaverse applications, we propose a novel pose mapping method and architecture to tackle a cross-domain head reenactment task. During inference, a user can easily transfer one's motion to an arbitrary animation head. Experiments demonstrate the usefulness of AnimeCeleb for training animation head reenactment models, and the superiority of our cross-domain head reenactment model compared to state-of-the-art methods.
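
The abstract above describes the dataset's core structure: rendered head images paired with detailed pose annotations. As a rough illustration of how such image/pose pairs could be consumed in PyTorch, here is a minimal sketch; the file layout (an `images/` folder plus a `poses.csv` index), the class name, and the flat pose-vector format are assumptions for illustration only, not this repository's actual data loader.

```python
# Hypothetical sketch only: the real AnimeCeleb loader, file layout, and pose
# format are defined in this repository's source code; names here are illustrative.
import os
import csv

import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class AnimeHeadPoseDataset(Dataset):
    """Pairs each rendered head image with its pose-annotation vector."""

    def __init__(self, root, annotation_csv="poses.csv", image_size=256):
        # Assumed layout: <root>/images/<name>.png plus a CSV mapping image
        # names to a flat pose vector (expression + head-rotation values).
        self.root = root
        self.samples = []
        with open(os.path.join(root, annotation_csv), newline="") as f:
            for row in csv.reader(f):
                name, *pose = row
                self.samples.append((name, [float(v) for v in pose]))
        self.transform = transforms.Compose([
            transforms.Resize((image_size, image_size)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        name, pose = self.samples[idx]
        image = Image.open(os.path.join(self.root, "images", name)).convert("RGB")
        return self.transform(image), torch.tensor(pose, dtype=torch.float32)
```

In a reenactment setting, a source image from one sample and a pose vector from another would then be fed to the trained model; the actual loaders and pose formats used here are documented in the source code linked in the TL;DR below.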

Expression Domain Translation Network — Official Implementation

Source | Driving | Animo | (New) EDTN

Project Page: https://keh0t0.github.io/research/EDTN/
Paper: https://arxiv.org/abs/2310.10073

TL;DR

This repository consists of three parts:

  • Download links for the AnimeCeleb dataset (click here) and the author list of the collected 3D models (click here).
  • The source code of the proposed algorithm for cross-domain head reenactment (click here).
  • (New) The source code of the expression domain translation network (EDTN) for cross-domain head reenactment (click here).

Citation

If you find this work useful for your research, please cite our paper:

@inproceedings{kim2021animeceleb,
  title={AnimeCeleb: Large-Scale Animation CelebHeads Dataset for Head Reenactment},
  author={Kim, Kangyeol and Park, Sunghyun and Lee, Jaeseong and Chung, Sunghyo and Lee, Junsoo and Choo, Jaegul},
  booktitle={Proc. of the European Conference on Computer Vision (ECCV)},
  year={2022}
}
@misc{kang2023expression,
  title={Expression Domain Translation Network for Cross-domain Head Reenactment},
  author={Taewoong Kang and Jeongsik Oh and Jaeseong Lee and Sunghyun Park and Jaegul Choo},
  year={2023},
  eprint={2310.10073},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

Acknowledgments

We appreciate other outstanding projects that inspired us: the talking-head-anime series and Making Anime Faces with StyleGAN. We would also like to thank the original authors of the collected 3D models; the list of their names and URLs is available in this file. The model code borrows heavily from FOMM and PIRenderer.
