jh-gglabs's repositories
avatars4all
Live real-time avatars from your webcam in the browser, with no dedicated hardware or software installation needed. A pure Google Colab wrapper for the live First Order Motion Model (aka Avatarify in the browser), plus other Colabs providing an accessible interface for using FOMM, Wav2Lip, and Liquid Warping GAN with your own media and a rich GUI.
Bandai-Namco-Research-Motiondataset
This repository provides motion datasets collected by Bandai Namco Research Inc.
BlazeFace-TFLite-Inference
Python scripts for face detection using the BlazeFace TensorFlow Lite models
Co-Speech-Motion-Generation
Freeform Body Motion Generation from Speech
CodeTalker
[CVPR 2023] CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior
DiffuseStyleGesture
DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with Diffusion Models (IJCAI 2023) | The DiffuseStyleGesture+ entry to the GENEA Challenge 2023 (ICMI 2023, Reproducibility Award)
EmoTalk_release
This is the official source for our ICCV 2023 paper "EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation"
facial-animation
Audio-driven facial animation generator that uses a BiLSTM to transcribe the speech, with a web interface displaying the avatar and the animation
FBX2glTF
A command-line tool for converting 3D model assets from the FBX file format to the glTF file format.
LiveSpeechPortraits
Live Speech Portraits: Real-Time Photorealistic Talking-Head Animation (SIGGRAPH Asia 2021)
Mediapipe-Halloween-Examples
Halloween-themed Python scripts using MediaPipe models.
mediapipe_face_iris_cpp
Real-time face and iris landmark detection in C++
PantoMatrix
PantoMatrix: Co-Speech Talking Head and Gestures Generation
QPGesture
QPGesture: Quantization-Based and Phase-Guided Motion Matching for Natural Speech-Driven Gesture Generation (CVPR 2023 Highlight)
SAiD
SAiD: Blendshape-based Audio-Driven Speech Animation with Diffusion
speech2affective_gestures
This is the official implementation of the paper "Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial Affective Expression Learning".
SURF-GAN
Official Pytorch implementation of "Injecting 3D Perception of Controllable NeRF-GAN into StyleGAN for Editable Portrait Image Synthesis", ECCV 2022
SysMocap
A real-time motion capture system for 3D virtual character animating.
ubisoft-laforge-ZeroEGGS
All about ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech
UnifiedGesture
UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons (ACM MM 2023 Oral)
youtube-gesture-dataset
This repository contains scripts to build the YouTube Gesture Dataset.