NVlabs / eg3d


Problem with rendering results when running inference with the ShapeNet model


Hello, I tried to use the gen_samples.py code with the official ShapeNet pre-trained model to generate some results. I found that the rendered image does not show a full view of the car:
[rendered image: p1.5707963267948966_y0]
so I tried to increase the camera radius when generating the camera pose, in order to see the whole car from a more distant viewpoint. When I change the radius from 1.7 to 2.5, you can clearly see that the camera does move farther away and more of the scene fits in view:
[rendered image at radius 2.5]
But I still can't see the whole car. So I set the radius to 2.8 or more, but then the code rendered a pure white image with nothing in it:
[rendered image: p1.5707963267948966_y0 (radius 2.8+)]
So I wonder why merely changing the camera radius makes such a big difference in the rendered result, and how to get a view of the whole car. Thanks a lot~
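
For reference, here is roughly how I change the radius; a paraphrased sketch of the pose construction in gen_samples.py, where LookAtPoseSampler takes the camera distance as its radius argument (the pivot and angles below are just the values for the frontal view above, not necessarily what your script uses):

```python
# Paraphrased sketch of the camera-pose construction in gen_samples.py:
# the radius passed to LookAtPoseSampler.sample is the camera distance I changed.
import numpy as np
import torch
from camera_utils import LookAtPoseSampler  # module from the eg3d repo

device = torch.device('cuda')
cam_pivot = torch.tensor([0.0, 0.0, 0.0], device=device)  # point the camera looks at
cam_radius = 2.5  # camera distance from the pivot (the default here is 1.7)

# Horizontal/vertical angles of pi/2 give the frontal view (yaw 0, pitch pi/2).
cam2world_pose = LookAtPoseSampler.sample(np.pi / 2, np.pi / 2, cam_pivot,
                                          radius=cam_radius, device=device)
```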

I also tried gen_video.py; it still can't show a full view of the car. How can I get a 360-degree view of the car?

[video attachment: interpolation.mp4]

I have solved this problem: simply modify the start and end points of the ray-sampling interval used during rendering. The default end of the interval is a bit near, so when you move the camera origin farther away, the sampled region may no longer cover the actual object. See the sketch below.
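
In case it helps, here is a minimal sketch of the change, assuming your checkpoint is one of the official pickles and exposes the sampling interval through G.rendering_kwargs; the checkpoint filename and the specific bounds below are illustrative assumptions, not verified defaults:

```python
# Minimal sketch: widen the ray-sampling interval after loading the generator,
# so rays cast from a more distant camera (radius 2.5+) still cover the car.
# The numbers are assumptions to illustrate the idea; choose them so that
# [ray_start, ray_end] brackets the object as seen from your camera radius.
import torch
import dnnlib, legacy  # modules from the eg3d repo

device = torch.device('cuda')
with dnnlib.util.open_url('shapenetcars128-64.pkl') as f:  # example checkpoint path
    G = legacy.load_network_pkl(f)['G_ema'].to(device)

radius = 2.5                                    # new camera distance from the origin
G.rendering_kwargs['ray_start'] = radius - 1.0  # near bound: just in front of the car
G.rendering_kwargs['ray_end'] = radius + 1.0    # far bound: past the car's far side
```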

Hi, could you please specify in a bit more detail where you modified the code? Thanks in advance!

My solution for this issue is to change the intrinsics matrix:

```python
import torch

device = torch.device('cuda')
focal_length = 1.7074  # a smaller focal length gives a wider field of view
intrinsics = torch.tensor([[focal_length, 0, 0.5], [0, focal_length, 0.5], [0, 0, 1]], device=device)
```
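
For context: if I'm reading the repo's camera_utils.py right, 1.7074 is the focal length that FOV_to_intrinsics produces for a 45-degree field of view (f = 1 / (tan(fov/2) * 1.414), in normalized image coordinates), which is much wider than the script's default FOV. A small sketch of that relation, under that assumption:

```python
# Sketch: relating the normalized focal length to a field of view, following
# what I believe is eg3d's FOV_to_intrinsics convention (the 1.414 factor and
# the 0.5 principal point come from the repo's normalized image coordinates).
import math
import torch

def fov_to_focal(fov_degrees: float) -> float:
    return 1.0 / (math.tan(math.radians(fov_degrees) / 2) * 1.414)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
focal_length = fov_to_focal(45.0)  # evaluates to ~1.7074
intrinsics = torch.tensor([[focal_length, 0, 0.5],
                           [0, focal_length, 0.5],
                           [0, 0, 1]], device=device)
```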