Anttwo / SuGaR

[CVPR 2024] Official PyTorch implementation of SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering

Home Page: https://anttwo.github.io/sugar/

Unable to reproduce the results from paper

ChengLyu opened this issue

I tried to run the SuGaR pipeline with python train.py ... on the Mip-NeRF datasets. Below, the first image is the mesh I got from running the code, and the second is taken from the paper. The table plane I got is bumpier, the background mesh is a bit messier, and the floor mesh has more holes than what is shown in the paper. I ran the SuGaR pipeline with all its default parameters, without any changes to the code. Is this expected, or is there anything I missed? Thank you very much for the help!

[Image 1: mesh obtained by running the code]
[Image 2: mesh figure from the paper]

Hello @ChengLyu,

Please excuse my late reply; I've been very busy with a recent deadline.

For this scene, you can use the --postprocess_mesh flag, as it will help remove some unnecessary vertices.
Actually, your mesh is pretty good, but the rendering engine makes it look quite bad as the lighting is not natural at all.
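For reference, a minimal sketch of how to add the flag (keep whatever dataset and checkpoint arguments you already passed to train.py; they are shown here only as a placeholder):

```
python train.py <your existing arguments> --postprocess_mesh
```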

To get a better-looking result, the secret is to use the Cycles engine in Blender!
Blender is entirely free and open-source. It's the perfect tool for making beautiful renderings of meshes or point clouds.

For these images, I did the following:

  • I just used the default surface shader for all the materials in the scenes (the shader called Principled BSDF).
  • I used a plain white color for the texture.
  • I followed some basic rules from photography for the lighting. I recommend looking up these basic but very useful photography practices to get good-looking renderings! For example, in almost all scenes, I used three-point lighting.
  • I increased the radius of the lights to get smoother and more realistic shadows.
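
To illustrate the list above, here is a minimal Blender Python (bpy) sketch of that setup: switch to Cycles, assign a white Principled BSDF material, and add a three-point rig of area lights. The object name "sugar_mesh" and the light positions, energies, and sizes are illustrative assumptions, not the exact values used for the paper figures.

```python
import bpy

# Use the Cycles render engine for more natural lighting.
bpy.context.scene.render.engine = 'CYCLES'

# Plain white material using the default Principled BSDF shader.
mat = bpy.data.materials.new(name="PlainWhite")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Base Color"].default_value = (1.0, 1.0, 1.0, 1.0)

# Assign the material to the imported mesh (assumed to be named "sugar_mesh").
mesh_obj = bpy.data.objects["sugar_mesh"]
if mesh_obj.data.materials:
    mesh_obj.data.materials[0] = mat
else:
    mesh_obj.data.materials.append(mat)

# Three-point lighting: key, fill, and back lights.
# Larger area lights give the softer, more realistic shadows mentioned above.
def add_area_light(name, location, energy, size):
    bpy.ops.object.light_add(type='AREA', location=location)
    light = bpy.context.active_object
    light.name = name
    light.data.energy = energy
    light.data.size = size  # larger size -> softer shadows
    return light

add_area_light("KeyLight",  ( 3.0, -3.0, 4.0), energy=1000.0, size=2.0)
add_area_light("FillLight", (-3.0, -2.0, 2.5), energy=400.0,  size=3.0)
add_area_light("BackLight", ( 0.0,  4.0, 3.5), energy=600.0,  size=2.0)
```

You can paste this into Blender's Scripting tab after importing the mesh, then tweak the light positions and energies to taste.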

I hope this message will help you!
Feel free to ask additional questions!