Weizhi-Zhong / IP_LAP

Implementation of the CVPR 2023 paper "Identity-Preserving Talking Face Generation With Landmark and Appearance Priors".

Slow Training Speed on LRS2 Dataset with 4x RTX 4090 GPUs (train_video_renderer.py)

Kiri0824 opened this issue

commented

I attempted to run train_video_renderer.py on the LRS2 dataset using four RTX 4090 GPUs, but the training speed is exceptionally slow. In a previous issue, I noticed that the author suggested running approximately 300 epochs for optimal results. However, the speed I'm getting is much lower than expected. Does anyone else have the same issue?
[screenshot of training speed]
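
In case it helps with debugging: a common cause of slow multi-GPU training is that the DataLoader, not the GPUs, is the bottleneck. Below is a minimal sketch (not the repo's actual code; `dataloader`, `model`, and `optimizer` are placeholders) that times the data-fetch and compute portions of each step so you can tell which side is slow.

```python
import time
import torch

def profile_steps(dataloader, model, optimizer, device, n_steps=20):
    """Time data fetching vs. GPU compute for a few training steps.

    If the fetch time dominates, the DataLoader (num_workers, disk I/O,
    on-the-fly preprocessing) is the bottleneck rather than the GPUs.
    `dataloader`, `model`, and `optimizer` are placeholders, not the
    repo's actual objects.
    """
    data_iter = iter(dataloader)
    fetch_total, compute_total = 0.0, 0.0
    for _ in range(n_steps):
        t0 = time.time()
        batch = next(data_iter)          # CPU-side loading / augmentation
        fetch_total += time.time() - t0

        t1 = time.time()
        batch = [x.to(device, non_blocking=True) for x in batch]
        loss = model(*batch)             # placeholder forward pass returning a scalar loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        torch.cuda.synchronize()         # wait for queued GPU work before timing
        compute_total += time.time() - t1

    print(f"avg fetch:   {fetch_total / n_steps:.3f}s")
    print(f"avg compute: {compute_total / n_steps:.3f}s")
```

If the fetch time dominates, raising `num_workers` on the DataLoader or moving heavy preprocessing offline usually helps more than adding GPUs.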

commented

BTW, in #12 I see the author said: "We stop training near 300 epochs, where FID is around 19, eval_gen_loss is around 7, and eval_warp_loss is around 11." But that training uses ref_N=3.
Here's my loss:
[screenshot of loss curves]
I haven't trained for 25 epochs yet, so the other losses haven't started decreasing. But the eval_warp_loss is a little low; is that normal? 😢

commented

It's normal. I fixed it 😊

Can you share how you fixed it?

commented

> Can you share how you fixed it?

Sorry for the late reply. It's just normal, just wait a few more days :)