[CVPR 2024 Highlight (11.9%)] Learning Adaptive Spatial Coherent Correlations for Speech-Preserving Facial Expression Manipulation
First, preprocess the downloaded MEAD dataset with 'align_face.py':
python align_face.py
Paired image frames of the same speaker saying the same sentence with different emotions are recorded in 'aligned_path36.json'.
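As a minimal sketch of how the pairing file could be consumed, the snippet below writes and re-reads a tiny JSON file of frame pairs. The schema (a list of neutral/emotion path pairs) and the sample paths are assumptions for illustration; the actual layout of 'aligned_path36.json' may differ.

```python
import json

# Hypothetical schema (assumption, not the repo's actual format):
# each entry pairs a neutral frame with an emotional frame of the
# same speaker saying the same sentence.
pairs = [
    {"neutral": "M003/neutral/001/0001.png",
     "emotion": "M003/happy/001/0001.png"},
]

# Write and re-read a demo file to mimic loading the pairing JSON.
with open("aligned_pairs_demo.json", "w") as f:
    json.dump(pairs, f)

with open("aligned_pairs_demo.json") as f:
    loaded = json.load(f)

for p in loaded:
    print(p["neutral"], "<->", p["emotion"])
```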
To train the model, run './trainer/train_asccl.py' with the preprocessed dataset path configured:
python ./trainer/train_asccl.py
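A small pre-flight check can catch a misconfigured dataset path before training starts. This is a hedged sketch: 'DATA_ROOT' and the assumption that the pairing JSON sits under the dataset root are illustrative, not options of 'train_asccl.py'.

```python
import os

# DATA_ROOT is an assumed location for the preprocessed MEAD data,
# not a documented flag of train_asccl.py.
DATA_ROOT = os.environ.get("DATA_ROOT", "./MEAD_aligned")
pair_file = os.path.join(DATA_ROOT, "aligned_path36.json")

if os.path.isfile(pair_file):
    print("pair file found:", pair_file)
    # os.system("python ./trainer/train_asccl.py")  # then launch training
else:
    print("missing", pair_file, "- run align_face.py first")
```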
ASCCL can also be integrated into the training process of NED (code):
- First, follow NED's data preprocessing pipeline to obtain the training data and model parameters.
- Then replace the 'train.py' file in NED's manipulator folder with the 'train_ned.py' file.
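The file swap in the second step can be scripted as below. This is a sketch under stated assumptions: 'NED_ROOT' and the location of 'train_ned.py' in the current directory's 'trainer' folder are guesses about your local checkouts, not paths confirmed by either repository.

```python
import os
import shutil

# NED_ROOT is an assumed path to your NED checkout (assumption).
NED_ROOT = os.environ.get("NED_ROOT", "./NED")
src = "./trainer/train_ned.py"  # assumed location in the ASCCL repo
dst = os.path.join(NED_ROOT, "manipulator", "train.py")

if os.path.isfile(src):
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    shutil.copy(src, dst)  # replace NED's trainer with the ASCCL-enabled one
    print("installed ASCCL trainer at", dst)
else:
    print("train_ned.py not found - run this from the ASCCL repo root")
```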