Repositories under the talking-heads topic:
[CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
ICASSP 2022: "Text2Video: text-driven talking-head video synthesis with phonetic dictionary".
Code for ACCV 2020 "Speech2Video Synthesis with 3D Skeleton Regularization and Expressive Body Poses"
A curated list of 'Talking Head Generation' resources. Features influential papers, groundbreaking algorithms, crucial GitHub repositories, insightful videos, and more. Ideal for AI enthusiasts, researchers, and graphics professionals
AI Talking Head: create a video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models.
Zippy Talking Avatar uses Azure Cognitive Services and the OpenAI API to generate text and speech. Built with Next.js and Tailwind CSS, the avatar responds to user input with both text and speech, offering a dynamic and immersive user experience.
Animated Characters: create a video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models.
Talking Avatar: create a video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models.
Build output of the talking-heads main UI repo.
Implementation of a lipreading method using landmarks from a 3D talking head.