Multi-GPU / distributed training in PyTorch 1.0
sisrfeng opened this issue
sisrfeng commented
DeNA#23 says:
Although not optimized, you can use multiple GPUs if you insert the following code at
https://github.com/DeNA/PyTorch_YOLOv3/blob/master/train.py#L94
model = torch.nn.DataParallel(model)
Is it the same for this repo? Do we need to change anything else if there are no errors?
Many thanks!
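
For context, a minimal sketch of what the suggested change would look like. The stand-in model and the checkpoint note are my own additions, not from this repo's train.py:

```python
import torch
import torch.nn as nn

# Stand-in model; in train.py this would be the YOLOv3 model.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())

# DataParallel replicates the model on every visible GPU and splits each
# input batch along dim 0, so the per-GPU batch size shrinks accordingly.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
if torch.cuda.is_available():
    model = model.cuda()

# Caveat: after wrapping, state_dict keys gain a "module." prefix, so save
# checkpoints via model.module.state_dict() to keep them loadable as before.
```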
Motoki Kimura commented
I've never tried it, so I can't say anything for sure.
I guess training with DataParallel will probably work, but the performance might not be very good, since the implementation is not optimized for parallel training.
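
If you need better scaling, DistributedDataParallel (one process per GPU) is generally faster than DataParallel. Below is a rough sketch, not based on this repo's train.py: the stand-in model and flag handling are illustrative, and it assumes the script is launched with `python -m torch.distributed.launch --nproc_per_node=<num_gpus> train.py`:

```python
import argparse
import torch
import torch.distributed as dist
import torch.nn as nn

# torch.distributed.launch passes --local_rank to each spawned process.
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)
args = parser.parse_args()

# One process per GPU: bind this process to its GPU, then join the group.
torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend="nccl", init_method="env://")

# Stand-in model; in train.py this would be the YOLOv3 model.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()).cuda()
model = nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank])

# The DataLoader should also use torch.utils.data.distributed.DistributedSampler
# so each process sees a disjoint shard of the dataset.
```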