cvg / nice-slam

[CVPR'22] NICE-SLAM: Neural Implicit Scalable Encoding for SLAM

Home Page: https://pengsongyou.github.io/nice-slam

Not enough memory RTX 2060

YerldSHO opened this issue

python -W ignore run.py configs/Demo/demo.yaml
INFO: The output folder is output/Demo
INFO: The GT, generated and residual depth/color images can be found under output/Demo/vis/
INFO: The mesh can be found under output/Demo/mesh/
INFO: The checkpoint can be found under output/Demo/ckpt/
Tracking Frame 1: 0%| | 1/500 [00:09<1:19:10, 9.52s/it]
Process Process-2:
Traceback (most recent call last):
File "/home/alex/miniconda3/envs/nice-slam/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/home/alex/miniconda3/envs/nice-slam/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/home/alex/projects/nice-slam/src/NICE_SLAM.py", line 276, in mapping
self.mapper.run()
File "/home/alex/projects/nice-slam/src/Mapper.py", line 606, in run
gt_c2w, self.keyframe_dict, self.keyframe_list, cur_c2w=cur_c2w)
File "/home/alex/projects/nice-slam/src/Mapper.py", line 503, in optimize_map
loss.backward(retain_graph=False)
File "/home/alex/miniconda3/envs/nice-slam/lib/python3.7/site-packages/torch/_tensor.py", line 363, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home/alex/miniconda3/envs/nice-slam/lib/python3.7/site-packages/torch/autograd/init.py", line 175, in backward
allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to run the backward pass
RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 5.78 GiB total capacity; 253.02 MiB already allocated; 6.50 MiB free; 296.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
[W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
Tracking Frame 1: 0%| | 1/500 [00:12<1:45:18, 12.66s/it]
Process Process-1:
Traceback (most recent call last):
File "/home/alex/miniconda3/envs/nice-slam/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/home/alex/miniconda3/envs/nice-slam/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/home/alex/projects/nice-slam/src/NICE_SLAM.py", line 266, in tracking
self.tracker.run()
File "/home/alex/projects/nice-slam/src/Tracker.py", line 233, in run
camera_tensor, gt_color, gt_depth, self.tracking_pixels, optimizer_camera)
File "/home/alex/projects/nice-slam/src/Tracker.py", line 107, in optimize_cam_in_batch
self.c, self.decoders, batch_rays_d, batch_rays_o, self.device, stage='color', gt_depth=batch_gt_depth)
File "/home/alex/projects/nice-slam/src/utils/Renderer.py", line 176, in render_batch_ray
raw = self.eval_points(pointsf, decoders, c, stage, device)
File "/home/alex/projects/nice-slam/src/utils/Renderer.py", line 60, in eval_points
ret = torch.cat(rets, dim=0)
RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 5.78 GiB total capacity; 372.05 MiB already allocated; 6.50 MiB free; 420.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
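The error message itself suggests one thing to try: setting the max_split_size_mb allocator option via PYTORCH_CUDA_ALLOC_CONF to reduce fragmentation. A minimal sketch, assuming it is applied before CUDA is initialized; the 64 MiB value here is an arbitrary starting point, not a value taken from this issue:

# Set the allocator option before the first CUDA call, e.g. at the very top of run.py,
# or equivalently on the command line:
#   PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:64 python -W ignore run.py configs/Demo/demo.yaml
import os
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:64")

import torch  # imported after the variable is set so the CUDA caching allocator picks it up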

Hi, I'm using an RTX 2060 with 6 GB of VRAM. The visualizer starts, but I can't run the NICE-SLAM demo training: it fails with CUDA out-of-memory errors even though the card has 6 GB. What can be done about this? I also tried Google Colab (NVIDIA Tesla T4), but there the code just runs and never produces any result.

Hi, maybe there were some other applications also consuming GPU memory, e.g. a browser? Note that the viewer will not work on a server; you need to run it on a desktop/laptop.
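To check whether something else is already holding VRAM, one option (a hedged sketch, not code from the NICE-SLAM repo) is to query the free and used memory from PyTorch before launching the demo; nvidia-smi reports the same information per process:

import torch

# Free/total memory on GPU 0 as seen by the driver (includes other processes' usage).
free_bytes, total_bytes = torch.cuda.mem_get_info(0)  # needs a reasonably recent PyTorch
print(f"free on GPU 0: {free_bytes / 1024**2:.0f} MiB of {total_bytes / 1024**2:.0f} MiB")

# Memory held by this Python process only.
print(f"allocated by this process: {torch.cuda.memory_allocated(0) / 1024**2:.0f} MiB")
print(f"reserved by this process:  {torch.cuda.memory_reserved(0) / 1024**2:.0f} MiB")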