Continuing personal investigation into markerless facial motion capture techniques using consumer technologies and free open source software
An experimental pipeline was developed to drive the facial animation of a MetaHuman rig in real time using Unreal Engine 4.27. The system takes a single RGB source, either a live camera stream or a stored video file, and uses computer vision libraries to detect a human face. The motion of key facial features is then sent to the game engine instance and mapped to Animation Blueprints that were originally designed to take data from the Live Link Face app.
By removing the need for depth data from a device with a TrueDepth camera, this project opens up the possibility for live puppeteering of complex digital avatars without having to buy into the closed ecosystem of high-end Apple devices.
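The curve values extracted from the face have to reach the engine somehow; the repository uses a TCP plugin for UE4.27. As an illustration of one possible wire format (the curve names below are a few of the ARKit-style blendshapes that Live Link Face streams, and the length-prefixed JSON framing is an assumption, not the plugin's actual protocol), a frame of animation data could be packed and sent like this:

```python
import json
import socket
import struct

# A few ARKit-style blendshape curve names, for illustration only.
# The real Live Link Face stream carries ~52 such curves.
CURVES = ["JawOpen", "MouthSmileLeft", "MouthSmileRight", "BrowInnerUp"]

def pack_frame(values):
    """Serialize curve values (0..1) as a length-prefixed JSON message,
    a simple framing scheme a receiving TCP plugin could parse."""
    payload = json.dumps(dict(zip(CURVES, values))).encode("utf-8")
    # 4-byte big-endian length header, then the JSON payload.
    return struct.pack("!I", len(payload)) + payload

def send_frame(sock: socket.socket, values):
    """Send one animation frame over an already-connected TCP socket."""
    sock.sendall(pack_frame(values))
```

On the Unreal side, the received curve dictionary would be decoded and fed into the same Animation Blueprint inputs that the Live Link Face app normally drives.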
- Unreal Engine 4.27: Release Notes/Download
- TCP Plugin for UE4.27: Github / Unreal Marketplace
- PyQt5: https://pypi.org/project/PyQt5/
- OpenCV: https://opencv.org/
- dlib: http://dlib.net/
- PyQt3D: https://pypi.org/project/PyQt3D/
- eos: https://github.com/patrikhuber/eos
- Mediapipe: https://google.github.io/mediapipe/
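As a rough sketch of how detected landmarks become animation values, a normalized mouth-openness measure can be derived from 2D face landmarks. The indices below follow the common dlib 68-point layout, but the specific normalization is an illustrative assumption, not necessarily what this project does:

```python
def jaw_open_ratio(landmarks):
    """Estimate mouth openness from 68-point face landmarks.

    landmarks: sequence of 68 (x, y) tuples in the dlib 68-point layout.
    Returns the inner-lip gap normalized by the outer-eye distance, so
    the value is roughly invariant to face size in the frame.
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    # 62/66: inner upper/lower lip midpoints; 36/45: outer eye corners.
    lip_gap = dist(landmarks[62], landmarks[66])
    face_scale = dist(landmarks[36], landmarks[45])
    return lip_gap / face_scale if face_scale else 0.0
```

A value like this could be clamped to [0, 1] and streamed as a `JawOpen`-style curve each frame.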
### To Run Python GUI:
- Clone or download git repository
- Install the dependencies listed in the README, if not already installed
- Download and unzip Resources.zip provided separately
- Add files from Resources/data into CAVE_FacialMocap/data
- Run facial_mocap.py
### To Run Unreal Project:
- Install Unreal Engine 4.27
- To use my MetaHuman, take Georgie_FaceMesh.uasset from the provided Resources and move it into the Content folder under CAVE_FacialMocap/UE4.27/Faces/Content/MetaHumans/Georgie/Face/
Tested on MacOS with Python 3.?
Tested on Windows 10 with Python 3.?