openai / point-e

Point cloud diffusion for 3D model synthesis


Can't Reproduce Samples from Paper

joeyism opened this issue · comments


Hi, thank you for your hard work.

I'm trying to reproduce the results from your paper, but I can't seem to get them. I'm running the code from text2pointcloud.ipynb with the captions mentioned in the paper, but I can't generate the same results. The sampling code I'm using is included below for reference.
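This is roughly the sampling loop as I'm running it, a sketch following text2pointcloud.ipynb (the checkpoint names and the guidance scale of 3.0 are the notebook defaults as I understand them, not values taken from the paper):

```python
import torch
from tqdm.auto import tqdm

from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
from point_e.diffusion.sampler import PointCloudSampler
from point_e.models.download import load_checkpoint
from point_e.models.configs import MODEL_CONFIGS, model_from_config

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Text-conditional base model: only the 40M text variant is published.
base_name = 'base40M-textvec'
base_model = model_from_config(MODEL_CONFIGS[base_name], device)
base_model.eval()
base_diffusion = diffusion_from_config(DIFFUSION_CONFIGS[base_name])
base_model.load_state_dict(load_checkpoint(base_name, device))

# Upsampler model: refines 1024 points up to 4096.
upsampler_model = model_from_config(MODEL_CONFIGS['upsample'], device)
upsampler_model.eval()
upsampler_diffusion = diffusion_from_config(DIFFUSION_CONFIGS['upsample'])
upsampler_model.load_state_dict(load_checkpoint('upsample', device))

sampler = PointCloudSampler(
    device=device,
    models=[base_model, upsampler_model],
    diffusions=[base_diffusion, upsampler_diffusion],
    num_points=[1024, 4096 - 1024],
    aux_channels=['R', 'G', 'B'],
    guidance_scale=[3.0, 0.0],
    model_kwargs_key_filter=('texts', ''),  # the upsampler is not text-conditioned
)

prompt = 'a pair of 3d glasses, left lens is red right is blue'

# Run the progressive sampler to completion and keep the final output.
samples = None
for x in tqdm(sampler.sample_batch_progressive(batch_size=1, model_kwargs=dict(texts=[prompt]))):
    samples = x

pc = sampler.output_to_point_clouds(samples)[0]
```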

Captions from the paper:
[screenshot: caption list and corresponding samples from the paper]

Using the same caption, `a pair of 3d glasses, left lens is red right is blue`, I get something like this instead:
[image: generated 3d-glasses point cloud]

Could you tell me where I went wrong, or has anyone else run into the same issue?
Thanks

I have the same issue. The shape of the generated item is not right, and the colors are scrambled.
[image: mismatched generated sample]

I think the reason is that the authors did not release their larger text-conditional models (only the 40M text model is provided, versus the 1B model used in the paper). Is there a plan to release the larger pretrained text-conditional model?
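For what it's worth, the larger checkpoints the repo does ship (`base300M`, `base1B`) appear to be image-conditional, so they can't be dropped into the text notebook directly; the paper's pipeline also needs a text-to-image model, which is not released. A sketch of loading one of them, reusing the imports and upsampler from the snippet above (the image path here is hypothetical):

```python
from PIL import Image

# Sketch: load a larger released base model. These condition on an image
# ('images' kwarg), not on text, so reproducing the paper's text results
# would still require the unreleased text-to-image model in front.
big_name = 'base1B'  # or 'base300M'
big_model = model_from_config(MODEL_CONFIGS[big_name], device)
big_model.eval()
big_diffusion = diffusion_from_config(DIFFUSION_CONFIGS[big_name])
big_model.load_state_dict(load_checkpoint(big_name, device))

sampler = PointCloudSampler(
    device=device,
    models=[big_model, upsampler_model],
    diffusions=[big_diffusion, upsampler_diffusion],
    num_points=[1024, 4096 - 1024],
    aux_channels=['R', 'G', 'B'],
    guidance_scale=[3.0, 0.0],
)

# Condition on a rendered view of the object (hypothetical file).
img = Image.open('example_view.png')
samples = None
for x in sampler.sample_batch_progressive(batch_size=1, model_kwargs=dict(images=[img])):
    samples = x
pc = sampler.output_to_point_clouds(samples)[0]
```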