alievk / npbg

Neural Point-Based Graphics


The result doesn't show any color

KimMingyeol opened this issue · comments

KimMingyeol commented:

Hi,
I rendered the trained model from epoch 20 using viewer.py, but there's an issue: as shown in the images below, the result doesn't have any color.
I think something may be wrong with the descriptor fitting process.
Is there any solution for this?
Thanks.

[Figure 1 · Figure 2 · Figure 3: screenshots of the colorless renderings]

seva100 commented:

Hi @KimMingyeol, can you please share your data (images and reconstruction) with us, along with the commands you used to train and view the results?

KimMingyeol commented:

Hi @seva100, here I attach a zip file on Google Drive containing: 1) the original images (images folder), 2) the undistorted images used for training, 3) cameras.xml, 4) point_cloud.ply, 5) scene.yaml, and 6) the trained data (PointTexture, UNet, ...).
Google Drive: https://drive.google.com/file/d/1hesoVmz-yPpMdKnOvPDF0m-x5dLulz5L/view?usp=sharing

Though the trained data says epoch_1, I have trained for over 30 epochs, which still produced only dark blue colors. I couldn't find the original epoch_30 checkpoint; I think I accidentally emptied the log folder, so I'm sending you the epoch_1 data instead (in fact, it's not far from the epoch_30 results either way).

I did not use a pretrained network (net_ckpt: None); other than that, I basically used the default commands posted on GitHub.

Commands used for training:
python train.py --config configs/train_example.yaml --pipeline npbg.pipelines.ogl.TexturePipeline --dataset_names scene

Commands used for viewing:
python viewer.py --config /home/alex/Codes/npbg_photosets/Train_Jigok2/scene.yaml --checkpoint data/logs/epoch1/checkpoints/PointTexture_stage_0_epoch_1_scene.pth --origin-view

Finally, about the point cloud: I simply used the script included in your npbg repository to generate it, so I doubt the problem is there.
It'd be great to get a solution as soon as possible.
Thanks.

seva100 commented:

@KimMingyeol, I can confirm the issue. I tried checking whether everything is correct using the procedure outlined here, and the renderings, which should contain the XYZ points colored as RGB, are all completely black (a sketch of this kind of check follows this comment). Most likely this means that something is wrong with the camera poses in cameras.xml. Please try exporting the cameras from Agisoft Metashape again.

In the meantime, I'm trying to reconstruct and train your scene on my side.
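For reference, here is a minimal sketch of the kind of XYZ-as-RGB check described above. This is not the repo's own script: the camera convention (OpenCV-style, z forward), the use of plyfile, and the splatting scheme are all assumptions, and the pose and intrinsics have to be parsed from cameras.xml separately.

```python
# Sanity-check camera poses: project the point cloud into a camera and
# color each point by its normalized XYZ world coordinates. A correct
# pose yields a smooth RGB gradient over the object; an all-black frame
# suggests the extrinsics don't match the cloud.
import numpy as np
import matplotlib.pyplot as plt
from plyfile import PlyData

def xyz_check(points, view_matrix, K, hw):
    """points: (N, 3) world coords; view_matrix: (4, 4) world-to-camera;
    K: (3, 3) intrinsics; hw: (height, width) of the output image."""
    h, w = hw
    # Colors = world coordinates rescaled to [0, 1].
    lo, hi = points.min(0), points.max(0)
    colors = (points - lo) / (hi - lo)

    # World -> camera -> pixel (OpenCV convention: z points forward).
    pts_h = np.c_[points, np.ones(len(points))]
    cam = (view_matrix @ pts_h.T).T[:, :3]
    in_front = cam[:, 2] > 0
    pix = (K @ cam[in_front].T).T
    pix = pix[:, :2] / pix[:, 2:3]

    # Crude splat without a z-buffer -- good enough for a sanity check.
    img = np.zeros((h, w, 3))
    u, v = pix[:, 0].astype(int), pix[:, 1].astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    img[v[ok], u[ok]] = colors[in_front][ok]
    return img

ply = PlyData.read('point_cloud.ply')['vertex']
pts = np.c_[ply['x'], ply['y'], ply['z']]
# view_matrix and K must be taken from cameras.xml for one of the cameras:
# plt.imshow(xyz_check(pts, view_matrix, K, (1080, 1920))); plt.show()
```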

seva100 commented:

Seems like either the cameras were wrong or the point cloud was not aligned with them. I've made a reconstruction based on your photographs, and NPBG trains pretty well with it:

[Video: mail_rec_trim.mp4]

I've trained it for 30 epochs on a point cloud with 10x fewer points (this resulted in ~2 mln points). To make the results better, you can:

  • make a reconstruction of only some part of the scene (and provide NPBG with only the photos used for that reconstruction)
  • make zoom augmentations more aggressive, e.g. random_zoom: [0.3, 3.0] instead of random_zoom: [0.5, 2.0] in train_example.yaml (see the snippet after this list)
  • if a full cloud (~29 mln points on my side) fits into your GPU memory, it's better to use it
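For concreteness, the zoom change from the second bullet is a one-line edit in configs/train_example.yaml (the rest of the file stays as it is):

```yaml
# configs/train_example.yaml -- more aggressive zoom augmentation
random_zoom: [0.3, 3.0]   # default: [0.5, 2.0]
```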

Here is the folder with cameras, reconstruction, configs, and learned NPBG parameters: https://drive.google.com/drive/folders/1LbrbKZDI2yFJUfnKoQ4fVENLTM4XbZcO?usp=sharing
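To reproduce the "10x fewer points" cloud mentioned above, one option is a quick Open3D script; the library choice and file names here are my assumptions, and any tool that can subsample a .ply works just as well.

```python
# Downsample the dense reconstruction ~10x so training fits in GPU memory.
# Keeping every 10th point is the simplest scheme matching "10x fewer points".
import open3d as o3d

pcd = o3d.io.read_point_cloud('point_cloud.ply')
small = pcd.uniform_down_sample(every_k_points=10)
print(len(pcd.points), '->', len(small.points))
o3d.io.write_point_cloud('point_cloud_10x.ply', small)
```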

KimMingyeol commented:

I checked the projections of the point cloud for each camera pose using pyplot, and confirmed that the camera poses were inconsistent with the corresponding images (a sketch of this kind of check is below).
I'll train it again with your cameras.xml file.
Thanks for your help! I'll close this issue.
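For anyone debugging the same issue, here is a sketch of the per-camera overlay check described in the closing comment (the file paths and camera convention are placeholders, as in the earlier sketch): scatter the projected cloud over the corresponding photo, and a pose mismatch is immediately visible.

```python
# Overlay the projected point cloud on a training photo with pyplot.
# If the pose from cameras.xml matches the image, the red points should
# line up with the scene content; a shifted or rotated cloud means the
# pose (or the cloud's alignment) is wrong.
import numpy as np
import matplotlib.pyplot as plt

def overlay(photo, points, view_matrix, K):
    pts_h = np.c_[points, np.ones(len(points))]
    cam = (view_matrix @ pts_h.T).T[:, :3]
    pix = (K @ cam[cam[:, 2] > 0].T).T
    pix = pix[:, :2] / pix[:, 2:3]
    plt.imshow(photo)
    plt.scatter(pix[:, 0], pix[:, 1], s=0.1, c='r', alpha=0.3)
    plt.xlim(0, photo.shape[1])
    plt.ylim(photo.shape[0], 0)  # keep image orientation (origin top-left)
    plt.show()

# photo = plt.imread('images_undistorted/00001.jpg')  # hypothetical path
# overlay(photo, pts, view_matrix, K)
```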