WongKinYiu / PyTorch_YOLOv4

PyTorch implementation of YOLOv4


Multi GPU takes longer

yunxi1 opened this issue · comments

Hello, training in DDP mode, I find that one epoch takes 26 min on two GPUs but only 16 min on a single GPU. Do you know why?

batch size = 16
device = 0,1
GPUs are TITAN RTX
This is the launch command:
python -m torch.distributed.launch --nproc_per_node 2 train.py
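For context, here is a minimal sketch of what that launcher does, assuming the standard torch.distributed.launch / DistributedDataParallel flow (illustrative only, not the repo's train.py; it uses the "gloo" backend so it also runs on CPU). Each of the --nproc_per_node processes trains on its own slice of the batch and all-reduces gradients after every backward pass:

```python
# Minimal DDP sketch: two processes, each handling half of the global batch.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size, total_batch):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = DDP(torch.nn.Linear(8, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    # Each process sees only total_batch // world_size samples per step.
    per_gpu_batch = total_batch // world_size
    x = torch.randn(per_gpu_batch, 8)
    y = torch.randn(per_gpu_batch, 1)

    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()  # gradient all-reduce (communication) happens here
    opt.step()
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(2, 16), nprocs=2)  # world_size=2, total batch=16
```

If train.py interprets --batch-size 16 as the global batch, each GPU only processes 8 images per step, and the gradient synchronization adds communication overhead to every iteration; together these could make the two-GPU epoch slower than the single-GPU one rather than faster.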