There are 60 repositories under the talking-head topic.
[CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head
[CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
Official implementation of the paper "DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models"
Wav2Lip UHQ extension for Automatic1111
[CVPR 2024] This is the official source for our paper "SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis"
Real time interactive streaming digital human
Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning
Official code for the CVPR 2022 paper "Depth-Aware Generative Adversarial Network for Talking Head Video Generation"
CVPR 2023 implementation of "Identity-Preserving Talking Face Generation With Landmark and Appearance Priors"
💬 An extensive collection of exceptional resources dedicated to the captivating world of talking face synthesis! ⭐ If you find this repo useful, please give it a star! 🤩
ICASSP 2022: "Text2Video: text-driven talking-head video synthesis with phonetic dictionary".
Code for the IJCAI 2021 paper "Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion"
This is the official repository for OTAvatar: One-shot Talking Face Avatar with Controllable Tri-plane Rendering [CVPR2023].
The official code of our ICCV 2023 work: Implicit Identity Representation Conditioned Memory Compensation Network for Talking Head Video Generation
Talking Head (3D): A JavaScript class for real-time lip-sync using Ready Player Me full-body 3D avatars.
Long-Inference, High-Quality Synthetic Speaker (AI avatar / AI presenter)
Freeform Body Motion Generation from Speech
A Survey on Deepfake Generation and Detection
The authors' implementation of the "Neural Head Reenactment with Latent Pose Descriptors" (CVPR 2020) paper.
[CVPR 2024] FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models
[ECCV 2024] EDTalk - Official PyTorch Implementation
PyTorch implementation for NED (CVPR 2022). It can be used to manipulate the facial emotions of actors in videos based on emotion labels or reference styles.
The PyTorch implementation of our WACV 2023 paper "Cross-identity Video Motion Retargeting with Joint Transformation and Synthesis".
Avatar Generation For Characters and Game Assets Using Deep Fakes
DiffPoseTalk: Speech-Driven Stylistic 3D Facial Animation and Head Pose Generation via Diffusion Models
Code for ACCV 2020 "Speech2Video Synthesis with 3D Skeleton Regularization and Expressive Body Poses"
Crystal TTVS engine is a real-time audio-visual multilingual speech synthesizer with a 3D expressive avatar.
📖 A curated list of resources dedicated to avatar.
Wanted an AI waifu but don't know how to create one? Now you have the opportunity to "animate" your favourite character from anime or manga. Subscribe to my YouTube and Telegram channels and give this project a star. Enjoy using it :3
Daily tracking of awesome avatar papers, including 2d talking head, 3d head avatar, body avatar.
AI Talking Head: create video from plain text or an audio file in minutes, supporting 100+ languages and 350+ voice models.
One-shot Audio-driven 3D Talking Head Synthesis via Generative Prior, CVPRW 2024