
This is the official source for our ICCV 2023 paper "EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation"

Psyche AI Inc release

EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation [ICCV2023]

Official PyTorch implementation for the paper:

EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation, ICCV 2023.

Ziqiao Peng, Haoyu Wu, Zhenbo Song, Hao Xu, Xiangyu Zhu, Hongyan Liu, Jun He, Zhaoxin Fan


Given audio inputs expressing different emotions, EmoTalk produces realistic 3D facial animation sequences with the corresponding emotional expressions.

Environment

  • Linux
  • Python 3.8.8
  • PyTorch 1.12.1
  • CUDA 11.3
  • Blender 3.4.1
  • ffmpeg 4.4.1

Clone the repo:

git clone https://github.com/psyai-net/EmoTalk_release.git
cd EmoTalk_release

Create conda environment:

conda create -n emotalk python=3.8.8
conda activate emotalk
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt
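
After installing, you can optionally sanity-check that the CUDA build of PyTorch was picked up (a minimal check, not part of the original setup steps):

  import torch

  # Verify the CUDA 11.3 build of PyTorch is active.
  print(torch.__version__)          # expected: 1.12.1+cu113
  print(torch.cuda.is_available())  # expected: True on a machine with CUDA 11.3
  print(torch.version.cuda)         # expected: 11.3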

Demo

Download Blender and unpack it into this directory:

wget https://mirror.freedif.org/blender/release/Blender3.4/blender-3.4.1-linux-x64.tar.xz
tar -xf blender-3.4.1-linux-x64.tar.xz
mv blender-3.4.1-linux-x64 blender && rm blender-3.4.1-linux-x64.tar.xz
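
If you want to double-check the bundled Blender binary before running the demo, here is a small optional smoke test (the ./blender/blender path follows from the commands above):

  import subprocess

  # Smoke test: the demo relies on the Blender binary unpacked above.
  out = subprocess.run(["./blender/blender", "--version"],
                       capture_output=True, text=True)
  print(out.stdout.splitlines()[0])  # expected to start with "Blender 3.4.1"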

Download the pretrained model from EmoTalk.pth. Put the pretrained model under the pretrain_model folder, put the audio under the audio folder, and run:

python demo.py --wav_path "./audio/disgust.wav"

The generated animation will be saved in the result folder.
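
To render several clips in one run, here is a minimal batch sketch (assuming demo.py takes only the --wav_path flag shown above):

  import subprocess
  from pathlib import Path

  # Run the demo once per .wav file in ./audio; results are written to ./result.
  for wav in sorted(Path("./audio").glob("*.wav")):
      subprocess.run(["python", "demo.py", "--wav_path", str(wav)], check=True)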

Dataset

Coming soon...

Citation

If you find this work useful for your research, please cite our paper:

  @inproceedings{peng2023emotalk,
    title={EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation},
    author={Ziqiao Peng and Haoyu Wu and Zhenbo Song and Hao Xu and Xiangyu Zhu and Hongyan Liu and Jun He and Zhaoxin Fan},
    booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    year={2023}
  }

Acknowledgement

Here are some great resources we benefit from:

Thanks to John Hable for sharing his head template under the CC0 license, which was very helpful for visualizing our results.

Contact

For research purposes, please contact pengziqiao@ruc.edu.cn.

For commercial licensing, please contact fanzhaoxin@psyai.net.

License

This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License. Please read the LICENSE file for more information.

Invitation

We invite you to join Psyche AI Inc to conduct cutting-edge research and business implementation together. At Psyche AI Inc, we are committed to pushing the boundaries of what's possible in the fields of artificial intelligence and computer vision, especially their applications in avatars. As a member of our team, you will have the opportunity to collaborate with talented individuals, innovate new ideas, and contribute to projects that have a real-world impact.

If you are passionate about working at the forefront of technology and making a difference, we would love to hear from you. Please visit our website at Psyche AI Inc to learn more about us and to apply for open positions. You can also contact us at fanzhaoxin@psyai.net.

Let's shape the future together!
