eriksandstroem / Point-SLAM

Point-SLAM: Dense Neural Point Cloud-based SLAM

When will the source code be released?

SharineLee opened this issue

Thank you for your great work!

When will the source code be released?

@tfy14esa

Thank you for your interest in our project and apologies for the late reply.

At this point, it is difficult to say exactly when we can release the code, but as soon as this becomes clearer, we will provide details here.

Any updates?

Thanks @renwuli for your interest! We are currently refactoring the code and want to release it in the best possible state, so it is taking a bit longer. Apologies for the wait.

Hi @eriksandstroem
In the paragraph Point Adding Strategy in Section 3.1, you define X and Y pixels; however, I couldn't find where X and Y are specified. Are the unprojected depths taken from X or from Y? What is the relationship between X and Y?

Hi @renwuli,
Thanks for your question. We define the values for X and Y in the implementation details.

We unproject X pixels (X is sampled uniformly in the image) into 3D and attempt to add them to the neural point cloud. We also have the option of unprojecting Y pixels (sampled in regions where the image gradient is high).
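To make this concrete, here is a minimal sketch of what sampling the X and Y pixels and unprojecting them could look like, assuming a pinhole camera with intrinsics K. The NumPy implementation, the helper names, and the gradient-weighted sampling scheme are my own illustrative assumptions, not the actual Point-SLAM code:

```python
import numpy as np

def unproject_pixels(depth, K, uv):
    """Back-project pixels uv (N, 2) with their measured depths into 3D camera coordinates."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = uv[:, 0], uv[:, 1]
    z = depth[v, u]                      # depth value at each sampled pixel
    x = (u - cx) / fx * z
    y = (v - cy) / fy * z
    return np.stack([x, y, z], axis=-1)  # (N, 3) points in the camera frame

def sample_candidate_pixels(gray, num_uniform, num_gradient):
    """Sample X pixels uniformly over the image plus Y pixels biased toward
    regions of high image gradient (illustrative weighting only)."""
    H, W = gray.shape

    # X pixels: uniform over the whole image
    u = np.random.randint(0, W, num_uniform)
    v = np.random.randint(0, H, num_uniform)
    uv_uniform = np.stack([u, v], axis=-1)

    # Y pixels: sampling probability proportional to the gradient magnitude
    gy, gx = np.gradient(gray.astype(np.float32))
    grad_mag = np.sqrt(gx ** 2 + gy ** 2).ravel() + 1e-8
    idx = np.random.choice(H * W, num_gradient, p=grad_mag / grad_mag.sum())
    uv_grad = np.stack([idx % W, idx // W], axis=-1)

    return np.concatenate([uv_uniform, uv_grad], axis=0)
```

Each sampled pixel then yields a 3D location via `unproject_pixels`, around which candidate neural points can be added.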

Does this help?

Hi @eriksandstroem,
Thanks for your reply. If you use the option that uniformly samples X pixels, is the dynamic resolution then not applied? If so, what value does the radius take in that case?

The dynamic radius is independent of the sampling of the X and Y pixels. The X and Y pixels simply determine the locations in the image from which we cast rays into the scene and try to add three points along each ray (centered at the observed depth). We can apply a dynamic search radius for these pixels since we know the image gradient magnitude for both the X and Y pixels.
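To illustrate how these two parts fit together, here is a small sketch; the radius bounds, the gradient normalization constant, and the offset between the three points are made-up numbers for demonstration, not the values we use:

```python
import numpy as np

def dynamic_radius(grad_mag, r_min=0.02, r_max=0.08, g_max=0.15):
    """Map the image gradient magnitude at a pixel to a search/adding radius:
    high gradient (fine detail) -> small radius -> denser points. Values are illustrative."""
    t = np.clip(grad_mag / g_max, 0.0, 1.0)
    return r_max - t * (r_max - r_min)

def candidate_points_along_ray(origin, direction, depth, offset=0.04):
    """Place three candidate points along the ray through a pixel, centered at the observed depth."""
    direction = direction / np.linalg.norm(direction)
    return np.stack([
        origin + (depth - offset) * direction,
        origin + depth * direction,
        origin + (depth + offset) * direction,
    ])
```

The radius depends only on the local gradient magnitude, so it applies in exactly the same way whether a pixel came from the uniform X set or the gradient-based Y set.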

Does that help?

Hi @eriksandstroem
Also, at the end of the paragraph Point Adding Strategy in Section 3.1, it says:

Contrary to many voxel-based representations, it is not required to specify any scene bounds before the reconstruction.

And I have not found any bound-specific parameters in either the paper or the implementation details.

Does this indicate that Point-SLAM can handle unbounded scenes? To my knowledge, Point-NeRF is not able to handle unbounded scenes. Have you tried running experiments on the KITTI dataset?