cvg / nice-slam

[CVPR'22] NICE-SLAM: Neural Implicit Scalable Encoding for SLAM

Home Page: https://pengsongyou.github.io/nice-slam

Data parallelism with pytorch

mattiapiz opened this issue · comments

Hi, as I read in most of the previous issues, CUDA out-of-memory errors are quite common, and I ran into the same problem. My setup has two Quadro RTX 4000 GPUs with 8 GB of VRAM each. So my question is: is it possible to have PyTorch use both GPUs during the mapping phase, i.e. inside the Mapper.py file? I tried, but I didn't find any torch.device() call there, so I don't know how PyTorch assigns the computation to a specific GPU. I have already tried splitting the work so that mapping runs on one GPU and tracking on the other, but I got the same error.
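For reference, below is a minimal, generic sketch of what data parallelism during mapping could look like with torch.nn.DataParallel. It is not NICE-SLAM's actual code: the class and variable names (SimpleDecoder, points) are placeholders, and the repo's decoders and sampling pipeline may not be structured to allow this directly.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a decoder MLP queried during mapping;
# not the decoder classes used in the NICE-SLAM repository.
class SimpleDecoder(nn.Module):
    def __init__(self, in_dim=3, hidden=256, out_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, points):
        return self.net(points)

decoder = SimpleDecoder().to("cuda:0")
if torch.cuda.device_count() > 1:
    # DataParallel splits the batch dimension of the input across both GPUs
    # and gathers the outputs back on cuda:0.
    decoder = nn.DataParallel(decoder, device_ids=[0, 1])

points = torch.rand(65536, 3, device="cuda:0")  # a batch of sampled 3D points
out = decoder(points)                           # forward pass runs on both GPUs
```

Note that DataParallel only spreads the per-batch compute; it does not reduce the memory needed for the feature grids themselves, which usually dominate the footprint.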

Hi, I actually also tried running tracking and mapping on different GPUs before. You just need to change the GPU device in the config file, e.g. cuda:0 for tracking and cuda:1 for mapping.
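A small sketch of that split in Python, assuming the config exposes per-component device strings (the exact key names below are an assumption, not necessarily the ones used in the repo's YAML files):

```python
import torch

# Assumed config layout: one device string for tracking, one for mapping.
cfg = {
    "tracking": {"device": "cuda:0"},
    "mapping": {"device": "cuda:1"},
}

tracking_device = torch.device(cfg["tracking"]["device"])
mapping_device = torch.device(cfg["mapping"]["device"])

# Each component would then move its own tensors/modules to its device, e.g.:
# camera_pose = camera_pose.to(tracking_device)
# decoders = decoders.to(mapping_device)
```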