"out of memory" , when execute kitti_submission
Ouya-Bytes opened this issue · comments
When I run kitti_submission.py, it fails with the following error:
`File "kitti_submission.py", line 162, in
evaluator.run()
File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "kitti_submission.py", line 104, in run
flow_3d_dense = knn_interpolation(
File "/workspace/CamLiFlow/models/utils.py", line 149, in knn_interpolation
knn_indices = k_nearest_neighbor(input_xyz, query_xyz, k) # [batch_size, n_queries, 3]
File "/workspace/CamLiFlow/models/csrc/wrapper.py", line 128, in k_nearest_neighbor
return _k_nearest_neighbor_py(input_xyz, query_xyz, k)
File "/workspace/CamLiFlow/models/csrc/wrapper.py", line 117, in _k_nearest_neighbor_py
dists = squared_distance(_query_xyz, _input_xyz)
File "/workspace/CamLiFlow/models/csrc/wrapper.py", line 50, in squared_distance
dist = -2 * torch.matmul(xyz1, xyz2.permute(0, 2, 1))
RuntimeError: CUDA out of memory. Tried to allocate 14.21 GiB (GPU 0; 22.38 GiB total capacity; 14.25 GiB already allocated; 7.34 GiB free; 14.29 GiB reserved in total by PyTorch)
I have already set batch_size=1, but it doesn't help. How much GPU memory does this script require?
Could you give me some advice? Thank you!
Compiling the CUDA extensions under csrc, as described in the README, solved this problem.
Yes, the CUDA extensions reduce GPU memory usage significantly.
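For context: the pure-Python fallback in the traceback (`_k_nearest_neighbor_py`) materializes the full `[batch_size, n_queries, n_inputs]` pairwise distance matrix at once, which is what triggers the 14 GiB allocation. Besides compiling the CUDA extensions, a chunked kNN also avoids that peak. Below is a minimal sketch of the chunking idea; `knn_chunked` is a hypothetical helper written for illustration, not part of CamLiFlow's API:

```python
import torch

def knn_chunked(input_xyz, query_xyz, k, chunk_size=2048):
    """Memory-friendly k-nearest-neighbor (hypothetical sketch).

    input_xyz: [B, N, 3] point cloud, query_xyz: [B, M, 3] queries.
    Processes queries in chunks so the distance matrix held in memory
    is at most [B, chunk_size, N] instead of [B, M, N].
    Returns neighbor indices of shape [B, M, k].
    """
    indices = []
    for start in range(0, query_xyz.shape[1], chunk_size):
        q = query_xyz[:, start:start + chunk_size]   # [B, m, 3]
        dists = torch.cdist(q, input_xyz)            # [B, m, N] for this chunk only
        indices.append(dists.topk(k, dim=-1, largest=False).indices)
    return torch.cat(indices, dim=1)
```

With chunk_size=2048 the per-chunk distance matrix for the sizes in the traceback would be on the order of megabytes rather than gigabytes, at the cost of a Python-level loop. The compiled CUDA kernel is still faster because it never builds a distance matrix at all.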