imlixinyang / HiSD

Official pytorch implementation of paper "Image-to-image Translation via Hierarchical Style Disentanglement" (CVPR 2021 Oral).

About multi-GPUs for training?

GreenLimeSia opened this issue · comments

commented

Hi, authors:

I found an issue when training the code with multiple GPUs: only a single GPU is used even though multiple GPUs are specified. Can you solve this problem in your spare time?

Thanks!

Hi! The code is expected to support multi-gpu training now with DataParallel.
Can you share which command you used?

commented

I trained the model with your code and your official command, i.e., python core/train.py --config configs/celeba-hq.yaml --gpus 0,1.
However, only a single GPU is used during training. The reason for this issue may be that the class "Gen" has no forward(self, ...) method. Can you help me?
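For context, a minimal sketch (not the repo's actual Gen class) of why a missing forward() matters: nn.DataParallel only splits a batch across GPUs when the wrapped module is invoked as model(x), which dispatches through forward(). A module that exposes only custom methods and is never called that way will stay on one device.

```python
import torch
import torch.nn as nn

# Hypothetical toy module for illustration only.
class TinyGen(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(64, 64)

    def forward(self, x):  # required entry point for DataParallel to scatter the batch
        return self.net(x)

if torch.cuda.device_count() > 1:
    model = nn.DataParallel(TinyGen().cuda(), device_ids=[0, 1])
    out = model(torch.randn(8, 64).cuda())  # batch is split across GPUs 0 and 1
```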

commented

A link for this issue is here. Can you solve this problem in your spare time? @imlixinyang
Thanks!

Try the command "python core/train.py --config configs/celeba-hq.yaml --gpus 0 1".
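For reference, a small sketch of why the space-separated form matters, under the assumption that core/train.py defines --gpus with argparse's nargs='+' (the exact definition in the repo may differ): each GPU id must then be a separate command-line token, so "0,1" is treated as a single entry rather than two devices.

```python
import argparse

# Illustrative parser only; assumes --gpus is declared with nargs='+'.
parser = argparse.ArgumentParser()
parser.add_argument('--gpus', nargs='+', default=['0'])

print(parser.parse_args(['--gpus', '0', '1']).gpus)  # ['0', '1'] -> two devices
print(parser.parse_args(['--gpus', '0,1']).gpus)     # ['0,1']    -> a single entry
```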

commented

I will try it now. Thanks for your reply. I am doing new work based on your novel work and will cite it.
Thanks for your hard work.

commented

It works now. Thanks again. @imlixinyang

Glad to hear that! Good luck with your research.