kwea123 / nerf_pl

NeRF (Neural Radiance Fields) and NeRF in the Wild using pytorch-lightning

Home Page: https://www.youtube.com/playlist?list=PLDV2CyUo4q-K02pNEyDr7DYpTQuka3mbV


Data explanation

013292 opened this issue · comments

I've replicated the mesh result and saved it as a .vtp 3D model.
Now I'm trying to project it with these parameters to see whether the projection fits the 2D image, but I cannot get a correct projection.
Here is my way to do the projection:

import json
import numpy as np
import vtk
from vtk.util.numpy_support import vtk_to_numpy
import matplotlib.pyplot as plt
from PIL import Image

# Load the mesh and get its vertices as an (N, 3) array
reader = vtk.vtkSTLReader()
reader.SetFileName('data/lego.stl')
reader.Update()
polydata = reader.GetOutput()
points = vtk_to_numpy(polydata.GetPoints().GetData())
points_homog_0 = np.hstack((points, np.ones((len(points), 1))))

with open('data/transforms_train.json') as f:
    para = json.load(f)
view_angle = np.rad2deg(para['camera_angle_x'])  # horizontal FOV in degrees
frame = para['frames'][0]
# transform_matrix is camera-to-world; invert it for world-to-camera
extrinsic_matrix = np.linalg.inv(np.array(frame['transform_matrix']))

pic_size = 800
# focal length in pixels: f = (W / 2) / tan(fov_x / 2)
focal_length = (pic_size / 2) / np.tan(np.radians(view_angle) / 2)
intrinsic_matrix = np.array([[focal_length, 0, pic_size / 2, 0],
                             [0, focal_length, pic_size / 2, 0],
                             [0, 0, 1, 0],
                             [0, 0, 0, 1]])

project_matrix = intrinsic_matrix @ extrinsic_matrix
points_homog = (project_matrix @ points_homog_0.T).T
points_proj = points_homog[:, :2] / points_homog[:, 2:3]  # perspective divide

fig, ax = plt.subplots(1, 1)
image = np.asarray(Image.open('data/r_0.png'))
ax.imshow(image)
ax.scatter(points_proj[:, 0], points_proj[:, 1], s=0.1, c='r')
plt.show()
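One thing worth noting for readers with the same problem: the NeRF synthetic (Blender) scenes store `transform_matrix` in the OpenGL camera convention, where the camera looks along -z with y up, while the pinhole projection above assumes z forward and y down. A minimal sketch of a projection that accounts for that axis flip (the helper `project_blender` and the test values are my own, not from this repo):

```python
import numpy as np

def project_blender(points, c2w, fov_x, W=800, H=800):
    """Project (N, 3) world points with a NeRF synthetic (Blender) camera.

    c2w: 4x4 camera-to-world matrix from transforms_*.json.
    The Blender/OpenGL camera looks along -z with y up, so we flip the
    y and z axes of the camera coordinates before the pinhole projection.
    """
    f = 0.5 * W / np.tan(0.5 * fov_x)   # focal length in pixels
    w2c = np.linalg.inv(c2w)
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (w2c @ pts_h.T).T[:, :3]      # points in camera coordinates
    cam[:, 1] *= -1                     # y up -> y down (image convention)
    cam[:, 2] *= -1                     # -z forward -> +z forward
    u = f * cam[:, 0] / cam[:, 2] + W / 2
    v = f * cam[:, 1] / cam[:, 2] + H / 2
    return np.stack([u, v], axis=1)
```

For example, a camera at (0, 0, 4) looking toward the origin (identity rotation in this convention) projects the origin to the image center (400, 400).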

I'm confused about the meaning of each item in the .json file. Could you please give me a hint?
Thank you :-)
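For reference, a sketch of what each field appears to hold, based on my reading of the public Blender/NeRF synthetic dataset format (the structure below is illustrative, not copied from the real file):

```python
import numpy as np

# Field meanings as I understand the Blender/NeRF synthetic format:
meta = {
    "camera_angle_x": 0.6911112070083618,  # horizontal FOV in radians (shared by all frames)
    "frames": [{
        "file_path": "./train/r_0",        # rendered image path (".png" appended by the loader)
        "rotation": 0.012566,              # angular step used when the poses were generated
        "transform_matrix": np.eye(4).tolist(),  # 4x4 camera-to-world pose, OpenGL convention
    }],
}

# The focal length in pixels follows from the FOV and image width:
W = 800
focal = 0.5 * W / np.tan(0.5 * meta["camera_angle_x"])
```

With the `camera_angle_x` above (the value commonly seen for the 800x800 synthetic scenes), this gives a focal length of roughly 1111 pixels.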