alievk / npbg

Neural Point-Based Graphics

Points descriptors

phongnhhn92 opened this issue · comments

Hello,
I would like to understand your descriptors better. If you train your network on 100 ScanNet scenes, you end up with a set of 100 descriptors (one for each scene), and each descriptor contains a set of N vectors (N points in the point cloud). Is my understanding correct?

I also have another question about the two-stage learning. In the pretraining stage, you use a set of scenes (set A) to train both the descriptors and the rendering network. Then, in the fine-tuning stage, you zero out the descriptors and fine-tune the rendering network to fit a new set of scenes (set B), right? My question is: after the fine-tuning stage, the network can obviously render novel views of set B, but can it still generalize to set A?

Correct. You can think of descriptors as point cloud colors. Each point cloud has its own N-dimensional "colors", and there is a single neural network that interprets these colors across different scenes.
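The setup described above can be sketched in PyTorch as one learnable descriptor tensor per scene plus a shared rendering network. This is a minimal illustrative sketch, not the repo's actual code: the class name `SceneDescriptors`, the 8-dimensional descriptor size, and the point counts are all assumptions made for the example.

```python
import torch
import torch.nn as nn

DESC_DIM = 8  # descriptor channels per point (assumed for illustration)

class SceneDescriptors(nn.Module):
    """One learnable (N_points, DESC_DIM) tensor per scene.

    These play the role of the per-scene 'neural colors' discussed
    above; a single shared rendering network (not shown) interprets
    them across all scenes.
    """
    def __init__(self, points_per_scene):
        super().__init__()
        self.descriptors = nn.ParameterList(
            [nn.Parameter(torch.randn(n, DESC_DIM) * 0.01)
             for n in points_per_scene]
        )

    def reset(self):
        # Fine-tuning on new scenes: zero out the descriptors and
        # re-optimize them, while the shared rendering network keeps
        # (or fine-tunes) its pretrained weights.
        for d in self.descriptors:
            nn.init.zeros_(d)

# Training on K scenes gives K descriptor sets, one per scene
# (two toy scenes here instead of 100 ScanNet scenes):
scenes = SceneDescriptors(points_per_scene=[120_000, 95_000])
print(scenes.descriptors[0].shape)  # torch.Size([120000, 8])
```

After `reset()`, the old per-scene descriptors are gone, which is why rendering set A again would require keeping (or re-fitting) its descriptors, even though the shared network itself is scene-agnostic.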

Thanks a lot!