GaussianTalker: Real-Time High-Fidelity Talking Head Synthesis with Audio-Driven 3D Gaussian Splatting
This is the official implementation of the paper
"GaussianTalker: Real-Time High-Fidelity Talking Head Synthesis with Audio-Driven 3D Gaussian Splatting".
For more information, please check out our Paper and our Project page.
We implemented and tested GaussianTalker on NVIDIA RTX 3090 and A6000 GPUs.
Run the commands below to set up the environment (details are in requirements.txt):
git clone https://github.com/joungbinlee/GaussianTalker.git
cd GaussianTalker
git submodule update --init --recursive
conda create -n GaussianTalker python=3.7
conda activate GaussianTalker
pip install -r requirements.txt
pip install -e submodules/custom-bg-depth-diff-gaussian-rasterization
pip install -e submodules/simple-knn
We used talking portrait videos from the AD-NeRF, GeneFace, and HDTF datasets. These are videos of a mostly static speaker, each about 3–5 minutes long.
You can download an example video with the line below:
wget https://github.com/yerfor/GeneFace/releases/download/v1.1.0/May.zip
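After downloading, the archive needs to be unpacked into your dataset directory. A minimal sketch using Python's standard zipfile module (the destination directory name is a placeholder for your own data root, not a path the repo prescribes):

```python
import zipfile
from pathlib import Path

def extract_dataset(archive, dest_dir):
    """Extract a downloaded dataset archive (e.g. May.zip) into dest_dir
    and return the top-level entries that were created."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
    return sorted(p.name for p in dest.iterdir())

# Example: extract_dataset("May.zip", "data")
```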
We also used SynObama for inference in the cross-driven setting.
Put "01_MorphableModel.mat" into data_utils/face_tracking/3DMM/
cd data_utils/face_tracking
python convert_BFM.py
cd ../../
python data_utils/process.py ${YOUR_DATASET_DIR}/${DATASET_NAME}/${DATASET_NAME}.mp4
After processing, the dataset directory will be organized as follows:

├── (your dataset dir)
│   └── (dataset name)
│       ├── gt_imgs
│       │   ├── 0.jpg
│       │   ├── 1.jpg
│       │   ├── 2.jpg
│       │   └── ...
│       ├── ori_imgs
│       │   ├── 0.jpg
│       │   ├── 0.lms
│       │   ├── 1.jpg
│       │   ├── 1.lms
│       │   └── ...
│       ├── parsing
│       │   ├── 0.png
│       │   ├── 1.png
│       │   ├── 2.png
│       │   ├── 3.png
│       │   └── ...
│       ├── torso_imgs
│       │   ├── 0.png
│       │   ├── 1.png
│       │   ├── 2.png
│       │   ├── 3.png
│       │   └── ...
│       ├── au.csv
│       ├── aud_ds.npy
│       ├── aud_novel.wav
│       ├── aud_train.wav
│       ├── aud.wav
│       ├── bc.jpg
│       ├── (dataset name).mp4
│       ├── track_params.pt
│       ├── transforms_train.json
│       └── transforms_val.json
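To confirm that preprocessing produced everything listed above before starting a long training run, a simple layout check can help. This is a sketch, not an official validator; the file list simply mirrors the tree shown:

```python
from pathlib import Path

# Directories and files process.py is expected to produce (per the layout above).
REQUIRED_DIRS = ["gt_imgs", "ori_imgs", "parsing", "torso_imgs"]
REQUIRED_FILES = [
    "au.csv", "aud_ds.npy", "aud_novel.wav", "aud_train.wav", "aud.wav",
    "bc.jpg", "track_params.pt", "transforms_train.json", "transforms_val.json",
]

def missing_outputs(dataset_dir):
    """Return the expected preprocessing outputs missing from dataset_dir."""
    root = Path(dataset_dir)
    missing = [d for d in REQUIRED_DIRS if not (root / d).is_dir()]
    missing += [f for f in REQUIRED_FILES if not (root / f).is_file()]
    return missing
```

An empty return value means the directory matches the expected layout.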
python train.py -s ${YOUR_DATASET_DIR}/${DATASET_NAME} --model_path ${YOUR_MODEL_DIR} --configs arguments/64_dim_1_transformer.py
Please adjust the batch size to fit your GPU memory.
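Before training, you can also sanity-check the train/val splits written by process.py. The sketch below assumes the transforms files follow the NeRF-style convention with a top-level "frames" list; that schema is an assumption, so verify it against your generated JSON:

```python
import json
from pathlib import Path

def count_frames(dataset_dir):
    """Count frames in transforms_train.json / transforms_val.json.
    Assumes a NeRF-style JSON with a top-level "frames" list (an
    assumption about the schema, not verified against the repo)."""
    counts = {}
    for split in ("train", "val"):
        path = Path(dataset_dir) / f"transforms_{split}.json"
        with open(path) as f:
            counts[split] = len(json.load(f)["frames"])
    return counts
```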
python render.py -s ${YOUR_DATASET_DIR}/${DATASET_NAME} --model_path ${YOUR_MODEL_DIR} --configs arguments/64_dim_1_transformer.py --iteration 10000 --batch 128
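If you end up with individual rendered frames and want a single video, ffmpeg can stitch them together. The helper below only builds the command; the numbered-frame naming pattern (0.jpg, 1.jpg, ...) is an assumption, so adjust it to match the renderer's actual output:

```python
def ffmpeg_frames_to_video(frames_dir, out_path, fps=25):
    """Build an ffmpeg command that stitches numbered frames into a video.
    The %d.jpg pattern assumes frames named 0.jpg, 1.jpg, ... (an
    assumption; change it to match the real output naming)."""
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", f"{frames_dir}/%d.jpg",
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        str(out_path),
    ]

# Run it with, e.g.: subprocess.run(ffmpeg_frames_to_video("renders", "out.mp4"))
```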