Harry-Zhi / semantic_nerf

The implementation of "In-Place Scene Labelling and Understanding with Implicit Scene Representation" [ICCV 2021].

Custom dataset

Iana-Zhura opened this issue

Hello!
Thank you very much for your great work!
I am planning to use your method for a robotic application, and I have a question: do you have a guide on how to prepare my own data for training and rendering?
As I understand it, the inputs include 2D color images, depth images, camera poses, and semantic classes. Is that correct? How should these data be stored as a dataset?

Thank you in advance!

Not to take away from this repo, but @Iana-Zhura there's also an implementation in NeRF Studio that may be of assistance to you.

Hi @Iana-Zhura ,

Thanks for your interest, and sorry for the late reply due to a recent deadline.

The network training requires colour images, camera poses (as well as intrinsics), and semantic labels (either dense, complete ones or noisy ones).
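For a concrete picture, a minimal loader for such data could look like the sketch below. The folder names, file formats, and pose convention here are assumptions for illustration (loosely modelled on the Replica renders), not necessarily the exact layout this repo's dataloader expects, so check against the code:

```python
# Minimal sketch of loading RGB images, per-pixel class labels, and poses.
# Assumed layout (illustrative only):
#   scene/rgb/*.png             colour images
#   scene/semantic_class/*.png  integer class-ID maps, same resolution as rgb
#   scene/traj_w_c.txt          one flattened 4x4 camera-to-world matrix per line
import glob
import numpy as np
from PIL import Image

def load_scene(root):
    rgbs = [np.asarray(Image.open(p), dtype=np.float32) / 255.0
            for p in sorted(glob.glob(f"{root}/rgb/*.png"))]
    labels = [np.asarray(Image.open(p), dtype=np.int64)
              for p in sorted(glob.glob(f"{root}/semantic_class/*.png"))]
    poses = np.loadtxt(f"{root}/traj_w_c.txt").reshape(-1, 4, 4)
    assert len(rgbs) == len(labels) == len(poses), "one pose and label map per image"
    return rgbs, labels, poses
```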

If you want to start quickly, you could try the object-level scenes that NeRF has demonstrated, adding your own manual annotations with tools such as labelme. Alternatively, you can start from an indoor dataset like ScanNet, which has all of these data available, though the data quality may be less satisfying because of the real-world captures.
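As a rough illustration of the labelme route, the polygons in labelme's JSON output can be rasterised into integer class-ID maps with plain PIL/numpy (a hedged sketch; the filenames and class list below are made up, and labelme also ships its own export utilities):

```python
# Sketch: convert one labelme JSON annotation into a per-pixel class-ID PNG.
# CLASS_IDS and the file names are illustrative only -- use your own label set.
import json
from PIL import Image, ImageDraw

CLASS_IDS = {"background": 0, "chair": 1, "table": 2}

def labelme_to_class_map(json_path, out_path):
    with open(json_path) as f:
        ann = json.load(f)
    # 8-bit image initialised to the background class.
    mask = Image.new("L", (ann["imageWidth"], ann["imageHeight"]), 0)
    draw = ImageDraw.Draw(mask)
    for shape in ann["shapes"]:  # each shape is a labelled polygon
        pts = [tuple(p) for p in shape["points"]]
        draw.polygon(pts, fill=CLASS_IDS[shape["label"]])
    mask.save(out_path)

labelme_to_class_map("frame_000.json", "frame_000_semantic.png")
```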

In my paper, I also render colours + labels + poses in the Replica dataset, which has decent indoor-scene data quality. I render these images using habitat-sim. Specifically, I pre-generate camera trajectories within the scene and move a virtual camera/agent in habitat-sim to each of the generated camera poses to produce a series of images.
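For completeness, the habitat-sim side looks roughly like the sketch below: configure an agent with colour and semantic sensors, then step it through pre-computed poses. This is a hedged outline, not the exact script used for the paper; the scene path and trajectory files are placeholders, and the habitat-sim API details should be checked against its documentation:

```python
# Sketch: render RGB + semantics along a pre-generated trajectory in habitat-sim.
import habitat_sim
import numpy as np
import quaternion  # numpy-quaternion, a habitat-sim dependency

def make_sim(scene_path):
    backend = habitat_sim.SimulatorConfiguration()
    backend.scene_id = scene_path  # e.g. a Replica scene mesh

    rgb = habitat_sim.CameraSensorSpec()
    rgb.uuid = "color_sensor"
    rgb.sensor_type = habitat_sim.SensorType.COLOR
    rgb.resolution = [480, 640]

    sem = habitat_sim.CameraSensorSpec()
    sem.uuid = "semantic_sensor"
    sem.sensor_type = habitat_sim.SensorType.SEMANTIC
    sem.resolution = [480, 640]

    agent_cfg = habitat_sim.agent.AgentConfiguration()
    agent_cfg.sensor_specifications = [rgb, sem]
    return habitat_sim.Simulator(habitat_sim.Configuration(backend, [agent_cfg]))

sim = make_sim("replica/room_0/mesh.ply")      # placeholder path
agent = sim.get_agent(0)
positions = np.load("positions.npy")           # placeholder: (N, 3) world positions
rotations = np.load("rotations.npy")           # placeholder: (N, 4) wxyz quaternions

for pos, quat in zip(positions, rotations):
    state = habitat_sim.AgentState()
    state.position = pos
    state.rotation = quaternion.from_float_array(quat)
    agent.set_state(state)
    obs = sim.get_sensor_observations()
    # obs["color_sensor"]: HxWx4 RGBA array
    # obs["semantic_sensor"]: HxW instance IDs (map to semantic classes via
    # the scene's semantic annotations, e.g. sim.semantic_scene)
```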

Hope this helps.

NeRF Studio, which @Jordan-Pierce mentioned, is also an amazing repo where you can quickly get a sense of what NeRF does. Thanks @Jordan-Pierce for mentioning it.

Hi @Harry-Zhi and @Jordan-Pierce !
I will for sure try NeRF Studio.

Thank you very much for your advice!

Closing this for now. Feel free to re-open it if you have any further questions.