There are 55 repositories under the lip-sync topic.
Industry-leading face manipulation platform
Real-time interactive streaming digital human
MuseTalk: Real-Time High Quality Lip Synchronization with Latent Space Inpainting
Rhubarb Lip Sync is a command-line tool that automatically creates 2D mouth animation from voice recordings. You can use it for characters in computer games, in animated cartoons, or in any other project that requires animating mouths based on existing recordings.
Wav2Lip UHQ extension for Automatic1111
Wunjo CE: Face Swap, Lip Sync, Remove Objects & Text & Background, Restyling, Audio Separator, Voice Cloning, Video Generation. Open source, local & free.
Real-time voice-interactive digital human, supporting an end-to-end voice pipeline (GLM-4-Voice - THG) and a cascaded pipeline (ASR-LLM-TTS-THG). Customizable appearance and voice with no training required, voice cloning supported, and first-packet latency as low as 3 s.
Extension of Wav2Lip repository for processing high-quality videos.
Talking Head (3D): A JavaScript class for real-time lip-sync using Ready Player Me full-body 3D avatars.
A simple Google Colab notebook which can translate an original video into multiple languages along with lip sync.
This project is a digital human that can talk and listen to you. It uses OpenAI's GPT to generate responses, OpenAI's Whisper to transcribe the audio, ElevenLabs to generate the voice, and Rhubarb Lip Sync to generate the lip sync.
Full version of wav2lip-onnx including face alignment and face enhancement and more...
3D Avatar Lip Synchronization from speech (JALI-based face rigging)
Learning Lip Sync of Obama from Speech Audio
Keras version of SyncNet, by Joon Son Chung and Andrew Zisserman.
AI Lip Syncing application, deployed on Streamlit
YerFace! A stupid facial performance capture engine for cartoon animation.
AI Talking Head: create a video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models.
Simple and fast Wav2Lip using a newly trained 256x256 ONNX-converted model for inference. Easy installation.
Simple and fast Wav2Lip using ONNX models for face detection and inference. Easy installation.
A package for simple, expressive, and customizable text-to-speech with an animated face.
A Python GUI script designed to work with Rhubarb Lip Sync to create mouth animation quickly and easily in mere seconds (depending on video length).
Audio-Visual Lip Synthesis via Intermediate Landmark Representation
Zippy Talking Avatar uses Azure Cognitive Services and the OpenAI API to generate text and speech. It is built with Next.js and Tailwind CSS. The avatar responds to user input by generating both text and speech, offering a dynamic and immersive user experience.
Create a deepfake video by simply uploading the original video and specifying the text the character will read.
Project page for FLOAT: Flow Matching for Audio-driven Talking Portrait Video Generation
Adventure Game Studio (AGS) module for lip sync
Godot audio processing.