John D. Pope's starred repositories
Deep3DFaceReconstruction
Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set (CVPRW 2019)
VPGC_Pytorch
This is the PyTorch implementation of the SIGGRAPH 2023 paper "Efficient Video Portrait Reenactment via Grid-based Codebook"
StyleSync_PyTorch
PyTorch implementation of "StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-based Generator"
Lipreading_using_Temporal_Convolutional_Networks
ICASSP'22 Training Strategies for Improved Lip-Reading; ICASSP'21 Towards Practical Lipreading with Distilled and Efficient Models; ICASSP'20 Lipreading using Temporal Convolutional Networks
lipsynth-experiment
An end-to-end pipeline that generates speech from silent lip videos using LLMs and audio-visual cues. It combines techniques from "AVI-Talking" and "Towards Accurate Lip-to-Speech Synthesis in-the-Wild", enabling synthesis from visual cues alone, without audio or transcripts.
DiffSpeaker
This is the official repository for DiffSpeaker: Speech-Driven 3D Facial Animation with Diffusion Transformer
DiffPoseTalk
DiffPoseTalk: Speech-Driven Stylistic 3D Facial Animation and Head Pose Generation via Diffusion Models
torch_packages_builder
Builder and index for PyTorch packages
MetaPortrait
[CVPR 2023] MetaPortrait: Identity-Preserving Talking Head Generation with Fast Personalized Adaptation
AdaSR-TalkingHead
ICASSP 2024: Adaptive Super-Resolution for One-Shot Talking-Head Generation
face-vid2vid
Unofficial implementation of One-Shot Free-View Neural Talking Head Synthesis
Linly-Talker
Digital Avatar Conversational System - Linly-Talker. Linly-Talker is an intelligent AI system that combines large language models (LLMs) with visual models to create a novel human-AI interaction method. It integrates technologies such as Whisper, Linly, Microsoft Speech Services, and the SadTalker talking-head generation system.
audio2photoreal
Code and dataset for photorealistic Codec Avatars driven from audio
Awesome-Talking-Head-Synthesis
An extensive collection of exceptional resources dedicated to the captivating world of talking face synthesis! If you find this repo useful, please give it a star!
All-in-One-Stable-Diffusion-Guide
A place to learn about Stable Diffusion
style2talker
[AAAI 2024] Style2Talker - Official PyTorch Implementation
iPhoneCinematicDepthTo3D
This is a sample C# project that extracts Depth and Color information from videos shot in iPhone's Cinematic mode and outputs each as separate videos, along with a sample Unity project for 3D playback of these videos.
ROCT-Thunk-Interface
ROCm's Thunk Interface
OutfitAnyone
Outfit Anyone: Ultra-high quality virtual try-on for Any Clothing and Any Person
StreamingT2V
StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text
facefusion
Next generation face swapper and enhancer