Effortlessly sync audio and video with our lip-syncing model, backed by the GFPGAN face-restoration model for exceptional visual quality. Lip-sync perfection! 🎤💫
Video - (https://openinapp.co/5cwva)
Audio - (https://openinapp.co/o9vuj)
Output Video - (https://drive.google.com/drive/folders/1cXaXb-rW8m8Fn8-_ZmwAdAz7BZ2ly0ZF?usp=sharing)
Google Colab :
# Run the lip_syncing.ipynb file inside Google Colab!
# Don't forget to change the runtime type to GPU
Local System :
Follow these steps:
- Clone the Repository
Clone this GitHub repository to your local machine.
- Download Pre-trained Models
Download the necessary pre-trained models:
Face Detection Model :
wget 'https://www.adrianbulat.com/downloads/python-fan/s3fd-619a316812.pth' -O './Lip-Sync-Repo/Wav2Lip/face_detection/detection/sfd/s3fd.pth'
Wav2Lip Model :
wget 'https://iiitaphyd-my.sharepoint.com/personal/radrabha_m_research_iiit_ac_in/_layouts/15/download.aspx?share=EdjI7bZlgApMqsVoEUUXpLsBxqXbn5z8VTmoxp55YNDcIA' -O './Lip-Sync-Repo/Wav2Lip/checkpoints/wav2lip_gan.pth'
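Note that `wget -O` fails if the target directory does not exist yet, so it is worth creating the checkpoint directories first. A minimal sketch, assuming the repository was cloned into `./Lip-Sync-Repo` as in the paths above:

```shell
# Create the checkpoint directories before downloading
# (paths assume the repo was cloned into ./Lip-Sync-Repo)
mkdir -p ./Lip-Sync-Repo/Wav2Lip/face_detection/detection/sfd
mkdir -p ./Lip-Sync-Repo/Wav2Lip/checkpoints
```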
- Install the dependencies from requirements.txt
- Install FFmpeg
- Install the NVIDIA CUDA Toolkit
- Install PyTorch
- Run Lip_Sync.py
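Before running Lip_Sync.py, the local prerequisites above can be sanity-checked from the command line. The helper below is a minimal sketch (the `check_prereqs` function name is hypothetical, not part of the repo), probing only for the tools the steps mention:

```shell
# check_prereqs: print the install status of each local prerequisite
# (hypothetical helper, not part of the repository)
check_prereqs() {
    # FFmpeg is needed to mux the generated frames with the audio track
    command -v ffmpeg >/dev/null 2>&1 && echo "ffmpeg: OK" || echo "ffmpeg: MISSING"
    # nvcc on PATH indicates the NVIDIA CUDA Toolkit is installed
    command -v nvcc >/dev/null 2>&1 && echo "cuda: OK" || echo "cuda: MISSING"
    # PyTorch is required to load the .pth checkpoints
    python -c "import torch" >/dev/null 2>&1 && echo "torch: OK" || echo "torch: MISSING"
}

check_prereqs
```

Each line prints `OK` or `MISSING`, so a missing dependency is obvious before the pipeline fails mid-run.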