graphdeco-inria / gaussian-splatting

Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"

Home Page: https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/

Why is the reconstruction result not satisfactory?

jiangyijin opened this issue · comments

[Attached: 10 sample input images, img000001 – img000054 (uploaded copies)]
This is a portion of the dataset I used for reconstruction.

Below is the result of my reconstruction.
[Attached: 5 screenshots of the reconstruction result]

Sparse points
[Attached: screenshot of the sparse point cloud]

Does anyone know the specific reasons why reconstruction results turn out poorly, or what I should do to get better results?

You can try to capture more pictures, or higher-quality pictures. If you have few pictures, or low-quality ones, try increasing the number of iterations. You could also try building the COLMAP database with different software, for example Agisoft Metashape. I would also recommend post-processing the images in some editing software to get better inputs.
I had a similar problem with a very detailed object, and these steps helped me.

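One way to act on the "select good pictures" advice is to screen out blurry frames automatically before running COLMAP. Below is a minimal sketch using NumPy and Pillow; the variance-of-Laplacian blur metric is a common heuristic, and the `threshold` value is a made-up number you would tune on your own dataset:

```python
import numpy as np
from PIL import Image

def sharpness(gray: np.ndarray) -> float:
    """Variance of the 3x3 Laplacian response of a grayscale array.

    Higher values indicate more high-frequency detail, i.e. a sharper image.
    """
    g = gray.astype(np.float64)
    # valid-mode convolution with the [[0,1,0],[1,-4,1],[0,1,0]] kernel,
    # implemented as shifted slices to avoid extra dependencies
    resp = (-4.0 * g[1:-1, 1:-1]
            + g[:-2, 1:-1] + g[2:, 1:-1]
            + g[1:-1, :-2] + g[1:-1, 2:])
    return float(resp.var())

def is_sharp(path: str, threshold: float = 100.0) -> bool:
    """threshold is a hypothetical cutoff; tune it on a few known-good frames."""
    gray = np.asarray(Image.open(path).convert("L"))
    return sharpness(gray) > threshold
```

You could run `is_sharp` over all 2000+ frames and copy only the passing ones into the folder you feed to COLMAP.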

Thank you very much for your response. I used COLMAP for the 3D reconstruction. I am trying to address image-exposure issues, so I would like to understand how to modify gaussian-splatting to handle them. My dataset consists of over 2000 images.

GS literally puts in everything you feed it, so it is better to curate your pictures. Select sharp, good-quality pictures. There is no need for high resolutions; I got good results from resolutions under 1600 px. I usually use images at 2x the size that will be used in training.

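The downscaling advice above (keep the longest side under roughly 1600 px, about 2x the resolution used in training) can be sketched with Pillow. `MAX_SIDE` is a hypothetical value taken from the comment, not a parameter of gaussian-splatting itself:

```python
from PIL import Image

MAX_SIDE = 1600  # hypothetical cap; roughly 2x the training resolution you plan to use

def downscale(img: Image.Image, max_side: int = MAX_SIDE) -> Image.Image:
    """Shrink an image so its longest side is at most max_side, keeping aspect ratio."""
    w, h = img.size
    scale = max_side / max(w, h)
    if scale >= 1.0:
        return img  # already small enough, leave it untouched
    return img.resize((round(w * scale), round(h * scale)), Image.LANCZOS)
```

Running this over the input folder before COLMAP keeps the dataset consistent and speeds up both reconstruction and training.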
Thank you very much for your response. Currently, I am working on 3D reconstruction from extreme images, so I would like to know whether there are parts of GS that can be modified to address this.

There is a small group of software tools that can 'edit' GS. Usually you only need to 'trim' the model (use the Blender plugin or websites like https://playcanvas.com/supersplat/editor ), because all colours and lighting come from the input pictures and are 'baked' into the model.
This is a great shortcut compared to traditional methods, but also a bit of a dead end (for now).

Re-coloring and re-lighting can be done in game engines like Unity or Unreal Engine via plugins, but then it isn't really GS anymore and you lose some quality.
(Bonus: Babylon.js 7 is a JavaScript game engine, https://www.babylonjs.com/ , that supports GS without a plugin.)

Converting to a mesh is still problematic for now. Look at 'SuGaR' ( https://anttwo.github.io/sugar/ ); it is promising.