UMass-Foundation-Model / 3D-LLM

Code for 3D-LLM: Injecting the 3D World into Large Language Models


How is scene data rendered from point clouds into images before the three steps?

SwimZhang opened this issue

How is scene data rendered from point clouds into images before the three steps? The project only explains the rendering method for Objaverse.

Can we use the same code and the same config parameters to render scene data as for Objaverse?

For ScanNet,
You could download the images directly from https://kaldir.vc.in.tum.de/scannet_benchmark/documentation (the scannet_frames_25k export). A sketch of loading that export is below.
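As a rough sketch, this is how you might iterate over the downloaded frames. The `color`/`depth`/`pose` subfolder layout is an assumption based on the standard ScanNet frame export; verify it against the archive you download. Depth being 16-bit PNG in millimeters and poses being 4x4 camera-to-world text files follow ScanNet's conventions.

```python
from pathlib import Path

import imageio.v2 as imageio
import numpy as np

# Assumed layout: scannet_frames_25k/<scene_id>/{color,depth,pose}/<frame>.{jpg,png,txt}
root = Path("scannet_frames_25k")

def load_frames(scene_id):
    """Yield (rgb, depth_in_meters, camera_to_world_pose) for each frame of a scene."""
    scene = root / scene_id
    for color_path in sorted((scene / "color").glob("*.jpg")):
        frame = color_path.stem
        rgb = imageio.imread(color_path)
        # ScanNet depth maps are 16-bit PNGs in millimeters; convert to meters
        depth = imageio.imread(scene / "depth" / f"{frame}.png").astype(np.float32) / 1000.0
        pose = np.loadtxt(scene / "pose" / f"{frame}.txt")  # 4x4 camera-to-world matrix
        yield rgb, depth, pose
```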

For HM3D,
You could directly use the images in the 3DMV-VQA dataset, or use the script https://github.com/evelinehong/3D-CLR-Official/blob/main/data_engine/render_hm3d.py to collect image data (a minimal sketch of that rendering approach follows).
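If you render views yourself, a minimal habitat-sim setup similar in spirit to the linked render_hm3d.py looks roughly like the sketch below. The scene path, resolution, and number of viewpoints are placeholders, and the API names follow recent habitat-sim releases; the actual script may configure sensors and sampling differently.

```python
import habitat_sim
import imageio.v2 as imageio

# Placeholder path; point this at one of your HM3D .glb scene files
scene_path = "hm3d/example-scene/example-scene.basis.glb"

sim_cfg = habitat_sim.SimulatorConfiguration()
sim_cfg.scene_id = scene_path

rgb_spec = habitat_sim.CameraSensorSpec()
rgb_spec.uuid = "color"
rgb_spec.sensor_type = habitat_sim.SensorType.COLOR
rgb_spec.resolution = [512, 512]

agent_cfg = habitat_sim.agent.AgentConfiguration(sensor_specifications=[rgb_spec])
sim = habitat_sim.Simulator(habitat_sim.Configuration(sim_cfg, [agent_cfg]))

# Sample random navigable viewpoints and save the rendered RGB frames
for i in range(10):
    state = habitat_sim.AgentState()
    state.position = sim.pathfinder.get_random_navigable_point()
    sim.agents[0].set_state(state)
    obs = sim.get_sensor_observations()
    imageio.imwrite(f"frame_{i:04d}.png", obs["color"])

sim.close()
```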

(The intuition here is that these scene datasets/scans are collected by fusing RGBD images, so we can directly use those RGBD images for reconstruction.)
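For concreteness, here is a minimal numpy sketch of that reconstruction direction: back-projecting one RGBD frame into a world-space point cloud, given a 3x3 intrinsics matrix K and a 4x4 camera-to-world pose (the conventions used in the per-frame files above). The function name is illustrative, not from the repo.

```python
import numpy as np

def backproject(depth, K, cam_to_world):
    """Lift a depth map (meters) into a world-space point cloud.

    depth: (H, W) float array; K: 3x3 intrinsics; cam_to_world: 4x4 pose.
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]                         # per-pixel coordinates
    z = depth.ravel()
    valid = z > 0                                     # drop missing depth readings
    pix = np.stack([u.ravel() * z, v.ravel() * z, z]) # 3 x N depth-scaled pixels
    cam = np.linalg.inv(K) @ pix                      # camera-space XYZ
    cam_h = np.vstack([cam, np.ones_like(z)[None]])   # homogeneous 4 x N
    world = (cam_to_world @ cam_h)[:3].T              # N x 3 world-space points
    return world[valid]
```

Fusing the clouds from all frames of a scan (optionally voxel-downsampling the result) is what produces the scene point clouds these datasets ship with.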