There are 33 repositories under the lip-sync topic.
Next generation face swapper and enhancer
Rhubarb Lip Sync is a command-line tool that automatically creates 2D mouth animation from voice recordings. You can use it for characters in computer games, in animated cartoons, or in any other project that requires animating mouths based on existing recordings.
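Rhubarb exports its recognized mouth cues as timestamped shape codes; in its TSV export format, each line is a `time<TAB>shape` pair, where the shape letters correspond to standard animation mouth shapes. A minimal Python sketch of consuming such output to look up the active mouth shape at a given playback time — the sample data and helper names here are illustrative, not part of Rhubarb itself:

```python
from bisect import bisect_right

def parse_cues(tsv_text):
    """Parse Rhubarb-style TSV output: one 'time<TAB>shape' pair per line."""
    cues = []
    for line in tsv_text.strip().splitlines():
        time_str, shape = line.split("\t")
        cues.append((float(time_str), shape))
    return cues

def shape_at(cues, t):
    """Return the mouth shape active at playback time t (in seconds)."""
    times = [time for time, _ in cues]
    i = bisect_right(times, t) - 1  # last cue starting at or before t
    return cues[max(i, 0)][1]

# Illustrative sample, not real Rhubarb output.
sample = "0.00\tX\n0.25\tD\n0.55\tB\n0.90\tX"
cues = parse_cues(sample)
print(shape_at(cues, 0.30))  # D
```

In a game or animation loop, `shape_at` would be called each frame with the audio playback position to select the mouth sprite or blend shape to display.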
Wav2Lip UHQ extension for Automatic1111
Extension of the Wav2Lip repository for processing high-quality videos.
MuseTalk: Real-Time High Quality Lip Synchronization with Latent Space Inpainting
A simple Google Colab notebook which can translate an original video into multiple languages along with lip sync.
Talking Head (3D): A JavaScript class for real-time lip-sync using Ready Player Me full-body 3D avatars.
Learning Lip Sync of Obama from Speech Audio
3D Avatar Lip Synchronization from speech (JALI based face-rigging)
Keras version of Syncnet, by Joon Son Chung and Andrew Zisserman.
This project is a digital human that can talk and listen to you. It uses OpenAI's GPT-3 to generate responses, OpenAI's Whisper to transcribe the audio, Eleven Labs to generate the voice, and Rhubarb Lip Sync to generate the lip sync.
AI Talking Head: create video from plain text or an audio file in minutes, supporting 100+ languages and 350+ voice models.
YerFace! A stupid facial performance capture engine for cartoon animation.
A Python GUI script designed to work with Rhubarb Lip Sync to create mouth animation quickly and easily in mere seconds (depending on video length).
AI Lip Syncing application, deployed on Streamlit
Audio-Visual Lip Synthesis via Intermediate Landmark Representation
Zippy Talking Avatar uses Azure Cognitive Services and the OpenAI API to generate text and speech. It is built with Next.js and Tailwind CSS. This avatar responds to user input by generating both text and speech, offering a dynamic and immersive user experience.
A package for simple, expressive, and customizable text-to-speech with an animated face.
Adventure Game Studio (AGS) module for lip sync
Create a deepfake video by simply uploading the original video and specifying the text the character will read.
Lip Language Video Data
Godot audio processing.
AR-based Android application using image processing and machine learning techniques that makes still images look like they are talking, with audio generation and lip movements synced to that audio.