Get an inference result for two arbitrary input point clouds
1eethink opened this issue
Hi,
How can I modify the code to take only two point clouds as input and output the scene flow between them? I don't need any ground masking or poses, because I assume the camera stays at the same location and the input point clouds contain no background.
I'm asking because, even though I tried my best to change the code, there are too many arguments and parts I would have to modify.
Thank you.
I believe this one is related to: #3 (comment)
Option A: save your data as an HDF5 file. Please check the code attached there, save your data the same way, then run it following the instructions.
Option B: read directly. Here is a branch I made for the DynamicMap benchmark where the dataloader reads each PCD file directly:
DeFlow/scripts/network/dataloader.py
Lines 70 to 123 in f8652b3
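For Option B without ground segmentation or poses, a minimal custom dataset could look like the sketch below. The dictionary keys follow this thread; the `scene_id`/`timestamp` values are placeholders, and note that the ground masks are per-point boolean vectors of shape (N,):

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class TwoCloudDataset(Dataset):
    """Wraps two raw point clouds in the dict format this thread describes."""

    def __init__(self, pc0: np.ndarray, pc1: np.ndarray):
        self.pc0, self.pc1 = pc0, pc1

    def __len__(self):
        return 1

    def __getitem__(self, idx):
        return {
            'scene_id': 'custom',          # placeholder
            'timestamp': 0,                # placeholder
            'pc0': torch.from_numpy(self.pc0).float(),
            'pc1': torch.from_numpy(self.pc1).float(),
            # Static camera: identity poses (sensor at the origin).
            'pose0': torch.eye(4, dtype=torch.float64),
            'pose1': torch.eye(4, dtype=torch.float64),
            # No ground: per-point boolean masks of shape (N,), all False.
            'gm0': torch.zeros(self.pc0.shape[0], dtype=torch.bool),
            'gm1': torch.zeros(self.pc1.shape[0], dtype=torch.bool),
        }

ds = TwoCloudDataset(np.random.rand(100, 3), np.random.rand(120, 3))
sample = ds[0]
print(sample['gm0'].shape)  # torch.Size([100])
```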
Hi,
Thanks for the very quick reply, and sorry for the basic question; I'm new to this field. From your reply, I'm still confused about how to deal with ground masking. It seems you used some configuration files (xxx.toml) for "groundseg". What if I don't have any configuration files, but fortunately have only point clouds without backgrounds? I tried to remove the code related to "gm" and "pose", but I failed. Is there no way to run with a res_dict that has only two keys (i.e., pc0, pc1)?
Thank you.
I set res_dict as below, as you mentioned:
res_dict = {
    'scene_id': 'scene_id',
    'timestamp': 'key',
    'pc0': torch.tensor(self.pc0),
    'pc1': torch.tensor(self.pc1),
    'pose0': torch.tensor(np.eye(4)),
    'pose1': torch.tensor(np.eye(4)),
    'gm0': torch.tensor(np.zeros(np.shape(self.pc0))),
    'gm1': torch.tensor(np.zeros(np.shape(self.pc1))),
}
But I get the error below:
File "/home/kin/workspace/DeFlow/scripts/pl_model.py", line 230, in test_step
batch['pc0'] = batch['pc0'][~batch['gm0']].unsqueeze(0)
TypeError: ~ (operator.invert) is only implemented on integer and Boolean-type tensors
Did I miss something?
I changed the ground masks to boolean:
'gm0': torch.tensor(np.zeros(np.shape(self.pc0)).astype(np.bool_)),
'gm1': torch.tensor(np.zeros(np.shape(self.pc1)).astype(np.bool_)),
It's weird... this leads to another error:
File "/opt/mambaforge/envs/deflow/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/kin/workspace/DeFlow/scripts/network/models/basic/make_voxels.py", line 65, in forward
not_nan_mask = ~torch.isnan(batch_points).any(dim=1)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
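This IndexError is consistent with the ground mask having the wrong shape: a boolean mask shaped like the points themselves, (N, 3), flattens the cloud to 1-D when used as an index, so the voxelizer's `any(dim=1)` then has no dimension 1. A minimal reproduction with toy tensors (not the repo's code):

```python
import torch

pc = torch.rand(5, 3)                            # toy cloud, N = 5 points
gm_wrong = torch.zeros(5, 3, dtype=torch.bool)   # wrong: per-coordinate mask
gm_right = torch.zeros(5, dtype=torch.bool)      # right: per-point mask, (N,)

print(pc[~gm_wrong].shape)  # torch.Size([15]) -- flattened to 1-D
print(pc[~gm_right].shape)  # torch.Size([5, 3]) -- points kept as (N, 3)
```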
Better to debug it yourself with your data, e.g., set a breakpoint and check the shape of the data.
Okay, I'll try it. But do you assume there could be NaN values in the point cloud data?
No, you should have valid points in the data (at least one valid point)... otherwise it doesn't make sense.
Let me know if you find out why that happens.
I found out why that happened, but still can't resolve it. The reason is that when the model voxelizes the input point clouds, it returns all-zero values.
In the deflow.py file:
self.timer[1].start("Voxelization")
pc0_before_pseudoimages, pc0_voxel_infos_lst = self.embedder(pc0s)
pc1_before_pseudoimages, pc1_voxel_infos_lst = self.embedder(pc1s)
self.timer[1].stop()
Can you help me figure it out? My inputs are as below:
{'pc0': tensor([[[211., 63., 61.],
[210., 63., 63.],
[211., 62., 63.],
...,
[325., 904., 136.],
[321., 904., 144.],
[320., 906., 144.]]], device='cuda:0'), 'pc1': tensor([[[191., 63., 336.],
[191., 63., 337.],
[190., 62., 339.],
...,
[111., 673., 512.],
[110., 674., 512.],
[112., 672., 512.]]], device='cuda:0'), 'pose0': tensor([[[1., 0., 0., 0.],
[0., 1., 0., 0.],
[0., 0., 1., 0.],
[0., 0., 0., 1.]]], device='cuda:0', dtype=torch.float64), 'pose1': tensor([[[1., 0., 0., 0.],
[0., 1., 0., 0.],
[0., 0., 1., 0.],
[0., 0., 0., 1.]]], device='cuda:0', dtype=torch.float64), 'gm0': tensor([[[False, False, False],
[False, False, False],
[False, False, False],
...,
[False, False, False],
[False, False, False],
[False, False, False]]], device='cuda:0'), 'gm1': tensor([[[False, False, False],
[False, False, False],
[False, False, False],
...,
[False, False, False],
[False, False, False],
[False, False, False]]], device='cuda:0')}
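Note the coordinate magnitudes in this dump: values in the hundreds would fall outside a typical metric point-cloud range, so a range filter inside voxelization could drop every point and leave all-zero voxels. A toy check of that hypothesis (the ±51.2 m range here is an assumption; use the values from your config):

```python
import torch

# Two points taken from the dump above, checked against an assumed range.
pc = torch.tensor([[211.0, 63.0, 61.0],
                   [325.0, 904.0, 136.0]])
pc_min = torch.tensor([-51.2, -51.2, -3.0])   # assumed trained range (min)
pc_max = torch.tensor([ 51.2,  51.2,  3.0])   # assumed trained range (max)

# A point survives only if every coordinate is inside the range.
in_range = ((pc >= pc_min) & (pc <= pc_max)).all(dim=1)
print(in_range)  # tensor([False, False]) -- both points would be dropped
```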
In other words, how can I deal with a different "point_cloud_range"? (I think this is the main difference.)
There are several potential issues I notice here:
- gm shape should be (N,), so replace the initialization with np.zeros(pc.shape[0]) etc.
- the pose is the sensor-center pose, which means the (0,0,0) coordinate is the sensor position.
- for the range limit, here is the config:
Lines 37 to 38 in 6f0321a
If you change the point cloud range or voxel size here (i.e., something other than 512x512x1), you need to retrain the model for your new setting.
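If retraining is not an option, one workaround is to rescale the inputs into the trained range before inference. A sketch under assumed range values (a 512x512 BEV grid with 0.2 m voxels spans ±51.2 m; take the real numbers from the config above). Note that scaling the inputs scales the predicted flow magnitudes by the same factor:

```python
import numpy as np

# Assumed trained range [x_min, y_min, z_min, x_max, y_max, z_max];
# replace with the values from your config.
point_cloud_range = np.array([-51.2, -51.2, -3.0, 51.2, 51.2, 3.0])

def fit_to_range(pc, pc_range=point_cloud_range):
    """Center a cloud and uniformly scale it into the trained range.

    Undo the scale on the output flow if you need metric values.
    """
    pc = pc - pc.mean(axis=0)             # move the centroid to the origin
    half_extent = pc_range[3:] * 0.95     # small safety margin
    scale = float((half_extent / np.abs(pc).max(axis=0)).min())
    return pc * min(scale, 1.0)           # shrink only, never enlarge

pc0 = np.random.rand(1000, 3) * 500.0     # toy cloud far outside the range
pc0_fit = fit_to_range(pc0)
```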