maum-ai / faceshifter

Unofficial PyTorch Implementation for FaceShifter (https://arxiv.org/abs/1912.13457)


multi GPU training

hanikh opened this issue · comments

Hi, thanks for your great code.
I have a problem when using 2 GPUs.
With 1 GPU the training speed is about 0.75 s/it (according to the progress bar),
but with 2 GPUs it is about 1.33 s/it. Since the number of iterations per epoch is halved with 2 GPUs, one epoch ends up taking almost the same time in both cases (1 and 2 GPUs).
Could you please help me figure out what the problem is?
Thanks a lot.

This is not an error or a bug, I think. For multi-GPU training, pytorch-lightning uses DDP (DistributedDataParallel), which adds per-step communication overhead (gradient all-reduce) on top of the compute. I recommend you read this
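To see why the epoch times come out nearly equal despite the halved iteration count, here is a back-of-the-envelope calculation using the numbers from the question. The iteration count is a hypothetical placeholder, and it assumes each GPU processes its own full batch under DDP (so the global batch doubles and iterations per epoch halve):

```python
# Back-of-the-envelope: epoch time under 1-GPU vs. 2-GPU DDP training.
# Assumption (not stated in the thread): under DDP each GPU gets a full
# batch, so iterations per epoch are halved with 2 GPUs.

iters_1gpu = 1000        # hypothetical iterations per epoch on 1 GPU
sec_per_it_1gpu = 0.75   # progress-bar speed reported for 1 GPU
sec_per_it_2gpu = 1.33   # progress-bar speed reported for 2 GPUs

epoch_1gpu = iters_1gpu * sec_per_it_1gpu        # 750 s per epoch
epoch_2gpu = (iters_1gpu / 2) * sec_per_it_2gpu  # 665 s per epoch

speedup = epoch_1gpu / epoch_2gpu
print(f"1 GPU epoch: {epoch_1gpu:.0f} s")
print(f"2 GPUs epoch: {epoch_2gpu:.0f} s")
print(f"speedup: {speedup:.2f}x")  # ~1.13x, well short of the ideal 2x
```

So there is a small real speedup per epoch; the gap to the ideal 2x is the DDP synchronization cost baked into the higher per-iteration time.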