AiuniAI / Unique3D

Official implementation of Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image

Home Page: https://wukailu.github.io/Unique3D/

How to get better result?

kexul opened this issue · comments

commented

Most of my test cases failed miserably.
[image]
[image]

As hinted in the readme.md:

  1. Unique3D is sensitive to the facing direction of input images. Due to the distribution of the training data, orthographic front-facing images with a rest pose always lead to good reconstructions.
  2. Images with occlusions will cause worse reconstructions, since four views cannot cover the complete object. Images with fewer occlusions lead to better results.
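The two hints above can be turned into a quick pre-flight check on the input image. A minimal sketch (my own heuristic, not part of Unique3D, assuming NumPy and an RGBA image whose background has already been removed): if the object's alpha mask touches the frame border, the object is truncated, which behaves like an occlusion the four views cannot recover.

```python
import numpy as np

def check_input_mask(rgba):
    """Heuristic pre-flight check for a single-image-to-3D input, given an
    HxWx4 uint8 RGBA array. The background should already be transparent,
    and the object should not be cut off at the frame borders.
    Returns a list of warning strings (empty means the image looks okay)."""
    alpha = rgba[:, :, 3] > 127  # boolean object mask
    if not alpha.any():
        return ["image is fully transparent / no object found"]
    warnings = []
    # Fraction of border pixels covered by the object mask.
    border = np.concatenate([alpha[0], alpha[-1], alpha[:, 0], alpha[:, -1]])
    if border.mean() > 0.02:
        warnings.append("object touches the image border (possible truncation)")
    # The object should occupy a reasonable share of the frame.
    if alpha.mean() < 0.05:
        warnings.append("object is very small in frame; crop tighter")
    return warnings
```

The thresholds (0.02, 0.05) are arbitrary illustration values; tune them for your data.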

For your first sample, the two wheels on the side are obscured, and the image is not a front view. (Usually an image with a view evelation of 0 is best.)
For the second one it's the same: the evelation is not 0. But the more serious problem is that, since this version uses multi-view independent normal prediction, normal prediction often fails for objects like buildings, boxes, etc. We have fixed this by training a new model, which will be released later.

[image]

commented

"Usually an image with a view evelation of 0 is best"?

@wukailu Thanks for your suggestions, would you mind expanding? Maybe a few examples? I'm not sure what 'evelation' is.

elevation* Basically, don't take a photo from above or below; this works best with images that are straight on, like the example above.
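To make "elevation 0" concrete: in the usual spherical camera convention, elevation is the camera's angle above the object's horizontal plane. At elevation 0 the camera sits level with the object, giving a straight-on view. A small sketch (my own illustration, not Unique3D code) converting elevation/azimuth angles to a camera position:

```python
import math

def camera_position(elevation_deg, azimuth_deg, radius=2.0):
    """Convert elevation/azimuth angles (in degrees) to a Cartesian camera
    position on a sphere of the given radius around the object (y is up).
    Elevation 0 places the camera level with the object; positive
    elevation looks down from above, negative looks up from below."""
    el = math.radians(elevation_deg)
    az = math.radians(azimuth_deg)
    x = radius * math.cos(el) * math.sin(az)
    y = radius * math.sin(el)
    z = radius * math.cos(el) * math.cos(az)
    return (x, y, z)

# At elevation 0 the camera height (y) is 0: a level, front-facing shot.
print(camera_position(0, 0))   # (0.0, 0.0, 2.0)
print(camera_position(30, 0))  # camera raised above the object (y > 0)
```

So "an image with a view elevation of 0" is a photo taken at the object's own height, not looking down or up at it.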

@wukailu With https://github.com/YvanYin/Metric3D being the SOTA for depth and normals, would it be possible to add it to your project?

"We have fixed this by training a new model, which will be released later."

When will the new version be released? I look forward to the results.