
(CVPR 2023) SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation

Home Page: https://sadtalker.github.io/


            Open In Colab       Hugging Face Spaces

Wenxuan Zhang *,1,2   Xiaodong Cun *,2   Xuan Wang 3   Yong Zhang 2   Xi Shen 2
Yu Guo 1   Ying Shan 2   Fei Wang 1

1 Xi'an Jiaotong University   2 Tencent AI Lab   3 Ant Group  

CVPR 2023

sadtalker

TL;DR:       single portrait image 🙎‍♂️      +       audio 🎤       =       talking head video 🎞.


🔥 Highlight

  • 🔥 Beta version of the full image mode is online! Check out here for more details.
still | still + enhancer | input image (@bagbag1815)
still1_n.mp4
still_e_n.mp4
image
  • 🔥 Several new modes, e.g., still mode, reference mode, and resize mode, are now online for better and more customized applications.
  • 🔥 Happy to see our method used in various talking and singing avatars; check out these wonderful demos on bilibili and twitter #sadtalker.

📋 Changelog

  • [2023.03.30]: Launch beta version of the full body mode.

  • [2023.03.30]: Launch new feature: by using reference videos, our algorithm can generate videos with more natural eye blinking and some eyebrow movement.

  • [2023.03.29]: resize mode is online via python inference.py --preprocess resize! It produces a larger crop of the image, as discussed in OpenTalker#35.

  • [2023.03.29]: Local gradio demo is online! Run python app.py to start it. A new requirements.txt is used to avoid bugs in librosa.

  • [2023.03.28]: Online demo is launched in Hugging Face Spaces, thanks AK!

    

Previous Changelogs

  • [2023.03.22]: Launch new feature: generating the 3D face animation from a single image. New applications based on it will follow.

  • [2023.03.22]: Launch new feature: still mode, where only a small head pose will be produced, via python inference.py --still.

  • [2023.03.18]: Support expression intensity; you can now change the intensity of the generated motion via python inference.py --expression_scale 1.3 (values > 1 strengthen the motion).

  • [2023.03.18]: Reorganized the data folders; you can now download the checkpoints automatically using bash scripts/download_models.sh.

  • [2023.03.18]: We have officially integrated GFPGAN for face enhancement; use python inference.py --enhancer gfpgan for better visual quality.

  • [2023.03.14]: Pinned the version of the joblib package to fix errors when using librosa; Open In Colab is online!

  • [2023.03.06]: Fixed some bugs in the code and errors in installation.

  • [2023.03.03]: Release the test code for audio-driven single image animation!

  • [2023.02.28]: SadTalker has been accepted by CVPR 2023!

🎼 Pipeline

main_of_sadtalker

🚧 TODO

Previous TODOs
  • Generating 2D face from a single Image.
  • Generating 3D face from Audio.
  • Generating 4D free-view talking examples from audio and a single image.
  • Gradio/Colab Demo.
  • Full body/image Generation.
  • Training code for each component.
  • Audio-driven Anime Avatar.
  • Integrate ChatGPT for a conversation demo 🤔
  • Integrate with stable-diffusion-webui. (stay tuned!)
sadtalker_demo_short.mp4

🔮 Installation

Dependence Installation

CLICK ME For Manual Installation
git clone https://github.com/Winfredy/SadTalker.git
cd SadTalker 
conda create -n sadtalker python=3.8
source activate sadtalker
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
conda install ffmpeg
pip install dlib-bin # dlib-bin installs much faster than dlib (alternative: conda install dlib)
pip install -r requirements.txt

### install GFPGAN for the enhancer
pip install git+https://github.com/TencentARC/GFPGAN

CLICK For Docker Installation

A Dockerfile is also provided by @thegenerativegeneration on Docker Hub, and can be used directly as:

docker run --gpus "all" --rm -v $(pwd):/host_dir wawa9000/sadtalker \
    --driven_audio /host_dir/deyu.wav \
    --source_image /host_dir/image.jpg \
    --expression_scale 1.0 \
    --still \
    --result_dir /host_dir

Trained Models

CLICK ME

You can run the following script to put all the models in the right place.

bash scripts/download_models.sh

OR download our pre-trained models from Google Drive or our GitHub release page, and then put them in ./checkpoints.

Model Description
checkpoints/auido2exp_00300-model.pth Pre-trained ExpNet in Sadtalker.
checkpoints/auido2pose_00140-model.pth Pre-trained PoseVAE in Sadtalker.
checkpoints/mapping_00229-model.pth.tar Pre-trained MappingNet in Sadtalker.
checkpoints/facevid2vid_00189-model.pth.tar Pre-trained face-vid2vid model from the unofficial reproduction of face-vid2vid.
checkpoints/epoch_20.pth Pre-trained 3DMM extractor in Deep3DFaceReconstruction.
checkpoints/wav2lip.pth Highly accurate lip-sync model in Wav2lip.
checkpoints/shape_predictor_68_face_landmarks.dat Face landmark model used by dlib.
checkpoints/BFM 3DMM library file.
checkpoints/hub Face detection models used in face alignment.

🔮 Inference Demo

Generating 2D face from a single Image

python inference.py --driven_audio <audio.wav> \
                    --source_image <video.mp4 or picture.png> \
                    --batch_size <default is 2, larger values run faster> \
                    --expression_scale <default is 1.0, a larger value makes the motion stronger> \
                    --result_dir <a folder to store results> \
                    --still <add this flag to produce less head motion> \
                    --preprocess <resize or crop the input image, default is crop> \
                    --enhancer <default is None, you can choose gfpgan or RestoreFormer> \
                    --full_img_enhancer <default is None, you can choose gfpgan or RestoreFormer> \
                    --ref_eyeblink <default is None, a reference video used to provide more natural eye blinking and eyebrow movement> \
                    --ref_pose <default is None, a reference video used to provide the head pose>
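To build intuition for the --expression_scale flag, here is a minimal sketch of coefficient scaling, assuming the generated motion is a list of per-frame expression coefficients. The function name and data shapes are illustrative only, not SadTalker's actual internals.

```python
# Hypothetical sketch: how a flag like --expression_scale could act on
# predicted per-frame expression coefficients. Names and shapes are
# illustrative assumptions, not SadTalker's real code.

def scale_expression(coeffs, expression_scale=1.0):
    """Scale each frame's expression coefficients to strengthen or soften motion."""
    return [[c * expression_scale for c in frame] for frame in coeffs]

frames = [[0.1, -0.2], [0.3, 0.0]]
print(scale_expression(frames, 2.0))  # → [[0.2, -0.4], [0.6, 0.0]]
```

Values above 1.0 exaggerate the motion uniformly, which matches the flag's documented effect of making the motion stronger.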
basic | w/ still mode | w/ exp_scale 1.3 | w/ gfpgan
art_0.japanese.mp4
art_0.japanese_still.mp4
art_0.japanese_scale1.3.mp4
art_0.japanese_es1.mp4

Kindly unmute the audio, as videos embedded on GitHub play muted by default.

Input | w/ reference video | reference video
free_view
If the reference video is shorter than the input audio, we will loop the reference video.
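The looping behavior above can be sketched as a simple modulo mapping from audio frames to reference frames. This is an illustration of the assumed behavior, not the repository's actual code.

```python
# Assumed looping behavior (illustrative, not SadTalker's implementation):
# when the reference video has fewer frames than the driven audio, reuse
# its frames cyclically so every audio frame has a reference frame.

def looped_reference_indices(num_audio_frames, num_ref_frames):
    """Map each audio frame to a reference frame index, looping when short."""
    return [i % num_ref_frames for i in range(num_audio_frames)]

print(looped_reference_indices(7, 3))  # → [0, 1, 2, 0, 1, 2, 0]
```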

Generating 3D face from Audio

Input | Animated 3D face
3dface.mp4

Kindly unmute the audio, as videos embedded on GitHub play muted by default.

More details on generating the 3D face can be found here.

Generating 4D free-view talking examples from audio and a single image

We use input_yaw, input_pitch, and input_roll to control the head pose. For example, --input_yaw -20 30 10 means the head yaw changes from -20 to 30 degrees and then from 30 to 10.

python inference.py --driven_audio <audio.wav> \
                    --source_image <video.mp4 or picture.png> \
                    --result_dir <a folder to store results> \
                    --input_yaw -20 30 10

free_view
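One way the yaw keyframes above could be expanded into per-frame angles is piecewise-linear interpolation between the listed values. The sketch below is a hypothetical illustration of that idea, not SadTalker's actual implementation.

```python
# Hypothetical sketch: expand a keyframe list like `--input_yaw -20 30 10`
# into a per-frame yaw sequence via piecewise-linear interpolation.
# Illustrative only; not SadTalker's real pose-handling code.

def expand_pose_keyframes(keys, num_frames):
    """Linearly interpolate pose keyframes across num_frames frames."""
    if len(keys) == 1:
        return [float(keys[0])] * num_frames
    segments = len(keys) - 1
    out = []
    for i in range(num_frames):
        t = i / (num_frames - 1) * segments  # position along the keyframe path
        seg = min(int(t), segments - 1)      # which keyframe segment we are in
        frac = t - seg                       # fraction through that segment
        out.append(keys[seg] + (keys[seg + 1] - keys[seg]) * frac)
    return out

print(expand_pose_keyframes([-20, 30, 10], 5))  # → [-20.0, 5.0, 30.0, 20.0, 10.0]
```

The head sweeps smoothly from -20 up to 30 and back down to 10, matching the description of the flag.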

[Beta] Full body/image Generation

Now you can use --still to generate a natural full-body video. You can add enhancer or full_img_enhancer to improve the quality of the generated video. However, if you add other modes, such as ref_eyeblink or ref_pose, the results will be poor. We are still working on fixing this.

python inference.py --driven_audio <audio.wav> \
                    --source_image <video.mp4 or picture.png> \
                    --result_dir <a file to store results> \
                    --still \
                    --enhancer gfpgan \
                    --full_img_enhancer gfpgan
still | still + enhancer | still + full_img_enhancer | input image (@bagbag1815)
still1_n.mp4
still_e_n.mp4
still_full_e_n.mp4
image

🛎 Citation

If you find our work useful in your research, please consider citing:

@article{zhang2022sadtalker,
  title={SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation},
  author={Zhang, Wenxuan and Cun, Xiaodong and Wang, Xuan and Zhang, Yong and Shen, Xi and Guo, Yu and Shan, Ying and Wang, Fei},
  journal={arXiv preprint arXiv:2211.12194},
  year={2022}
}

💗 Acknowledgements

Facerender code borrows heavily from zhanglonghao's reproduction of face-vid2vid and from PIRender. We thank the authors for sharing their wonderful code. In the training process, we also use models from Deep3DFaceReconstruction and Wav2lip, and we thank them for their wonderful work.

🥂 Related Works

📢 Disclaimer

This is not an official product of Tencent. This repository can only be used for personal/research/non-commercial purposes.

LOGO: color and font suggestion: ChatGPT; logo font: Montserrat Alternates.

All the copyrighted demo images come from community users or were generated by stable diffusion. Feel free to contact us if you feel uncomfortable.

License: MIT License

