cvg / nice-slam

[CVPR'22] NICE-SLAM: Neural Implicit Scalable Encoding for SLAM

Home Page: https://pengsongyou.github.io/nice-slam


Your own captured apartment dataset

Yiiii19 opened this issue · comments

First of all, thank you so much for this amazing work and code! I have two questions about your own captured apartment dataset:

  1. Did you just use a Kinect camera to capture the video and Open3D for the reconstruction? How could the resulting "integrated.ply" be so good? I was using an Intel RealSense camera and Open3D for reconstruction, and my integrated result is much worse than yours. So I would like to ask whether you did any special configuration or preprocessing. (The camera should not matter; the Open3D output folder layout is the same.)

  2. I wanted to run "python -W ignore run.py configs/Apartment/apartment.yaml", but it seems to take a long time. If you already have the reconstruction result from Open3D, what is the purpose of running NICE-SLAM? Sorry for the naive question; I thought NICE-SLAM was meant to "replace" Open3D for the reconstruction task.

Thank you so much!

Hi,
Thanks a lot for your interest in our work.

  1. I did not modify anything; everything uses Open3D's default Redwood pipeline.
  2. There is no need to first run Open3D and then NICE-SLAM; you can run NICE-SLAM directly after capturing a video sequence.

Thanks for your reply!
I still have one question about "Specify the bound of the scene" for a custom dataset. I do not have ground-truth camera poses, so how did you construct the world coordinates from the first frame?

In NICE_SLAM.py there is the load_bound function, which requires the bound from the custom yaml file:
self.bound = torch.from_numpy(np.array(cfg['mapping']['bound']) * self.scale)
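For context, here is a minimal sketch of what that line computes, using NumPy only so it runs standalone. The config values below are illustrative placeholders, not the real Apartment bound:

```python
import numpy as np

# Hypothetical config fragment mirroring the apartment.yaml layout
# (values are made up for illustration, not the real dataset's bound).
cfg = {
    'mapping': {
        # axis-aligned box: [[x_min, x_max], [y_min, y_max], [z_min, z_max]]
        'bound': [[-5.0, 5.0], [-3.0, 3.0], [0.0, 6.0]],
    }
}
scale = 1.0  # NICE-SLAM's global scale factor from the config

# Same computation as load_bound, minus the torch wrapper
bound = np.array(cfg['mapping']['bound']) * scale
print(bound.shape)                 # (3, 2): one (min, max) pair per axis
print(bound[:, 1] - bound[:, 0])   # side lengths of the box in meters
```

So the bound is simply an axis-aligned box in world coordinates, scaled by the global scale factor.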

I understand you recommend running Open3D first to get the bound. However, from your answer above, I can run NICE-SLAM directly. How should I then set the bound (with no ground-truth poses) in the yaml file or elsewhere?
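One workaround (my own assumption, not the authors' prescription): if the first camera pose is taken as the world origin, a generous axis-aligned box around that origin should contain the scene. The helper below, `conservative_bound`, is a hypothetical name for a sketch of that idea:

```python
import numpy as np

def conservative_bound(scene_extent_m, margin_m=1.0):
    """Axis-aligned bound centered on the origin, padded by a margin.

    scene_extent_m: rough (x, y, z) size of the scene in meters,
    estimated by eye or with a tape measure.
    """
    half = np.asarray(scene_extent_m, dtype=float) / 2.0 + margin_m
    # One (min, max) pair per axis, shape (3, 2)
    return np.stack([-half, half], axis=1)

# e.g. a roughly 8 m x 6 m x 3 m apartment
print(conservative_bound([8.0, 6.0, 3.0]).tolist())
# [[-5.0, 5.0], [-4.0, 4.0], [-2.5, 2.5]]
```

The resulting rows could then be copied into the yaml's mapping bound entry. A looser bound wastes feature-grid resolution, so tightening it once a first reconstruction exists is probably worthwhile.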

Thank you so much.