cvg / nice-slam

[CVPR'22] NICE-SLAM: Neural Implicit Scalable Encoding for SLAM

Home Page: https://pengsongyou.github.io/nice-slam

How to run my own RGB-D sequence from ROS

HongqingThomas opened this issue · comments

Hi, first of all, thank you very much for your great work.

I'd like to run your code on my own dataset, subscribed from ROS. Currently I have the camera intrinsic matrix and an RGB-D image stream; my question is how to use them to generate the dataset and config/own.yaml.
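For the config side, NICE-SLAM reads the camera intrinsics from the yaml file. Below is a minimal sketch that writes a `cam:` section from a 3x3 intrinsic matrix; the field names (`fx`, `fy`, `cx`, `cy`, `H`, `W`, `png_depth_scale`) are assumptions modeled on the repo's Azure Kinect config and should be verified against the yaml files shipped in `configs/`:

```python
# Sketch: write a NICE-SLAM-style camera section from a 3x3 intrinsic matrix.
# Field names are assumed from the repo's Azure Kinect config; verify them
# against the actual files under configs/ before use.

def write_own_yaml(K, H, W, depth_scale, path):
    """K is the 3x3 intrinsic matrix as nested lists; H, W the image size."""
    fx, fy = K[0][0], K[1][1]
    cx, cy = K[0][2], K[1][2]
    text = (
        "cam:\n"
        f"  H: {H}\n"
        f"  W: {W}\n"
        f"  fx: {fx}\n"
        f"  fy: {fy}\n"
        f"  cx: {cx}\n"
        f"  cy: {cy}\n"
        f"  png_depth_scale: {depth_scale}\n"
    )
    with open(path, "w") as f:
        f.write(text)
    return text

# Hypothetical intrinsics for a 640x480 sensor with depth in millimeters:
K = [[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]]
print(write_own_yaml(K, 480, 640, 1000.0, "own_cam.yaml"))
```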

I do notice that you have a very detailed explanation of how to use an RGB-D sequence from an Azure Kinect; however, I'm confused about how to run it with our own dataset when it does not come from an Azure Kinect. You mentioned that we can use the Redwood tools; could you specify how?

I'm assuming that, since I already have the RGB-D images, I do not need to record and extract frames as in your step 2, but how can I run the reconstruction in step 3 without a config.json?
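For context, the missing config.json drives the offline reconstruction. A minimal sketch of such a file, assuming the layout used by the Open3D reconstruction-system examples (key names and defaults should be double-checked against the Open3D documentation; all paths are placeholders):

```json
{
    "name": "Own RGB-D sequence",
    "path_dataset": "dataset/own/",
    "path_intrinsic": "dataset/own/intrinsic.json",
    "depth_scale": 1000.0,
    "max_depth": 3.0,
    "voxel_size": 0.05,
    "max_depth_diff": 0.07,
    "tsdf_cubic_size": 3.0,
    "icp_method": "color",
    "global_registration": "ransac"
}
```

Here `path_dataset` would contain the `color/` and `depth/` frames exported from the ROS stream, and `path_intrinsic` an Open3D-style pinhole-intrinsic json built from the camera matrix.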

What's more, I'm wondering how I can obtain the ground-truth camera poses (gt_camera_pose) and the integrated.ply file in this setup.
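On the pose side: the Redwood/Open3D reconstruction pipeline writes its estimated camera trajectory in the Redwood `.log` format alongside the integrated mesh, and these estimates are what stand in for ground truth here. A minimal parser sketch for that format (one metadata line of three integers, then a 4x4 camera-to-world matrix over the next four lines):

```python
# Sketch: parse a camera trajectory in the Redwood .log format.
# Each entry is a metadata line (three integers) followed by a 4x4
# camera-to-world pose matrix spread over the next four lines.

def read_trajectory_log(lines):
    poses = []
    it = iter(lines)
    for meta in it:
        if not meta.strip():
            continue  # skip blank lines between entries
        # meta is e.g. "0 0 1"; the next four lines are the pose matrix
        rows = [[float(x) for x in next(it).split()] for _ in range(4)]
        poses.append(rows)
    return poses

example = """0 0 1
1.0 0.0 0.0 0.5
0.0 1.0 0.0 0.0
0.0 0.0 1.0 0.2
0.0 0.0 0.0 1.0
"""
poses = read_trajectory_log(example.splitlines())
print(len(poses), poses[0][0][3])  # 1 pose, x-translation 0.5
```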

Thanks again for your time and your significant contribution to the SLAM community! Looking forward to your answers!

Hey,
thanks for your interest in our work.
I just did a quick search and found https://github.com/xdeng7/redwood_open3d_3dreconstruction, which might be helpful.
After finishing the reconstruction, you can run src/tools/prep_own_data.py. Then run NICE-SLAM.
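The steps above might look like the following end to end; only `src/tools/prep_own_data.py` is confirmed by the reply, while the reconstruction entry point and its flags are assumptions based on the Open3D reconstruction-system examples:

```sh
# Sketch only: script names and flags besides prep_own_data.py are
# assumptions; check the repo README and the Open3D examples for exact usage.

# 1. Offline reconstruction (produces fragments, a trajectory .log, and
#    integrated.ply) with the Open3D reconstruction system:
python run_system.py own_config.json --make --register --refine --integrate

# 2. Convert the reconstruction output into NICE-SLAM's input format:
python src/tools/prep_own_data.py

# 3. Run NICE-SLAM on the prepared sequence:
python -u run.py configs/Own/own.yaml
```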
Good luck!