airsplay / R2R-EnvDrop

PyTorch Code of NAACL 2019 paper "Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout"


Multiple GPUs

wangqian621 opened this issue · comments

How can I use multiple GPUs for training? Do I need to add multi-GPU support to the model myself?

You can try distributed data parallel: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html. However, since different nav rollouts have different lengths and PyTorch DDP performs synchronized updates, each step would proceed at the pace of the slowest rollout.
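For illustration, here is a minimal sketch of wrapping a model in `DistributedDataParallel`. It uses a single process with the `gloo` (CPU) backend and a placeholder `Linear` layer standing in for the navigation agent; in real multi-GPU training you would launch one process per GPU (e.g. with `torchrun`), use the `nccl` backend, and take rank/world size from the environment.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process setup purely for illustration; with torchrun, MASTER_ADDR,
# MASTER_PORT, RANK, and WORLD_SIZE are provided by the launcher.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

model = torch.nn.Linear(16, 4)   # stand-in for the actual nav agent
ddp_model = DDP(model)           # gradients are all-reduced across ranks

optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.1)
x = torch.randn(8, 16)
loss = ddp_model(x).sum()
loss.backward()                  # synchronization point: every rank waits here
optimizer.step()

dist.destroy_process_group()
```

The `backward()` call is where the synchronization happens, which is why a rank whose rollouts finish early still waits for the slowest one before its update can proceed.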