There are 14 repositories under the wav2lip topic.
PaddlePaddle GAN library, including lots of interesting applications like First-Order motion transfer, Wav2Lip, picture repair, image editing, photo2cartoon, image style transfer, GPEN, and so on.
Real-time interactive streaming digital human
AigcPanel is a simple, easy-to-use all-in-one AI digital human system. It supports video synthesis, voice synthesis, and voice cloning, and simplifies local model management with one-click import and use of AI models.
Wav2Lip UHQ extension for Automatic1111
PyTorch Implementation for Paper "Emotionally Enhanced Talking Face Generation" (ICCVW'23 and ACM-MMW'23)
Alternative to Flawless AI's TrueSync. Makes lips in a video match provided audio using the power of Wav2Lip and GFPGAN.
lipsync is a simple and updated Python library for lip synchronization, based on Wav2Lip. It synchronizes lips in videos and images based on provided audio, supports CPU/CUDA, and uses caching for faster processing.
(Windows/Linux/MacOS) Local WebUI with neural network models (Text, Image, Video, 3D, Audio) in Python (Gradio interface). Translated into 3 languages.
Orchestrating AI for stunning lip-synced videos. Effortless workflow, exceptional results, all in one place.
AI Lip Syncing application, deployed on Streamlit
AI Talking Head: create a video from plain text or an audio file in minutes, with support for 100+ languages and 350+ voice models.
Create a deepfake video by simply uploading the original video and specifying the text the character will read
GUI to sync video mouth movements to match audio, utilizing wav2lip-hq. Completed as part of a technical interview.
Virtual news production using Tacotron2 and Wav2Lip
IN4U - a web service for practicing job interviews
Generates high-quality voice and video clones (AI portrait narration) from a small number of voice and video samples, and provides a variety of audio-processing techniques to improve sound quality and realism.
AIStreameur: make your favorite people stream!
This project is dedicated to advancing the field of animatronic robots by enabling them to generate lifelike facial expressions, pushing the boundaries of what's possible in human-robot interaction.
StreamFastWav2lipHQ is a near real-time speech-to-lip synthesis system built on Wav2Lip and a lip enhancer; it can be used for streaming applications.
The LipSync-Wav2Lip-Project repository is a comprehensive solution for achieving lip synchronization in videos using the Wav2Lip deep learning model. This open-source project includes code that enables users to seamlessly synchronize lip movements with audio tracks.
POSCO Youth AI·Big Data Academy - AI project
This repository demonstrates the use of the powerful Wav2Lip model to synchronize lip movements with speech in videos. The deep learning model generates human-like accurate lip sync that enhances the visual appeal of any talking-head video.
This repository hosts the code used by Apollo during Wav2Lip's inference process.
Voice to LipSync: high-quality lip-sync video generation, using OpenVoice for zero-shot TTS and Wav2Lip to generate the lip-synced video.
A version of the original Wav2Lip quick-trial notebook that works directly in a Drive folder and fixes a naming error
A wrapper around the Wav2Lip algorithm using Django.
Runs the well-known Wav2Lip model in a Google Colab environment