Cartucho / vision_blender

A Blender addon for generating synthetic ground truth data for Computer Vision applications

Read_data.py error

nnagururu opened this issue · comments

Hi,

I get the following output when running read_data.py from the sample code file

```
['optical_flow', 'segmentation_masks', 'intrinsic_mat', 'extrinsic_mat', 'normal_map', 'depth_map', 'object_pose_labels', 'object_pose_mats']
Camera intrinsic mat:
[[888.88888889   0.         319.5       ]
 [  0.         888.88888889 239.5       ]
 [  0.           0.           1.        ]]

Camera extrinsic mat:
[[ 6.85920656e-01  7.27676332e-01 -4.01133171e-09 -7.88161851e-03]
 [ 3.24013472e-01 -3.05420876e-01 -8.95395637e-01 -6.00126651e-02]
 [-6.51558220e-01  6.14170372e-01 -4.45271403e-01  1.12561550e+01]]

Object poses:
Cube
[[ 6.85920656e-01 -7.27676332e-01  4.01133171e-09 -7.88161851e-03]
 [ 3.24013472e-01  3.05420876e-01  8.95395637e-01 -6.00126651e-02]
 [-6.51558220e-01 -6.14170372e-01  4.45271403e-01  1.12561550e+01]]
Traceback (most recent call last):
  File "read_data.py", line 32, in
    point_3d_cam = np.matmul(extrinsic_mat, point_3d)
ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 3 is different from 4)
```

I see that the shapes of extrinsic_mat and point_3d are incompatible for this multiplication. Am I doing something wrong?

I switched the order and it worked:

```python
point_3d_cam = np.matmul(point_3d, extrinsic_mat)
```

Hi! We cannot just swap the operands: we want to project the 3D point into the camera coordinate frame, and reversing the multiplication order computes a different product, not that projection.

I have just committed a fix for this issue; essentially, I had to ensure that the 3D point is in homogeneous coordinates. Can you give it a try with the updated version?
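For context, a minimal sketch of the homogeneous-coordinates fix (the extrinsic matrix is the 3x4 one printed above; the world point here is a hypothetical example, not from read_data.py):

```python
import numpy as np

# 3x4 extrinsic matrix [R|t] as printed in the output above.
extrinsic_mat = np.array([
    [ 6.85920656e-01,  7.27676332e-01, -4.01133171e-09, -7.88161851e-03],
    [ 3.24013472e-01, -3.05420876e-01, -8.95395637e-01, -6.00126651e-02],
    [-6.51558220e-01,  6.14170372e-01, -4.45271403e-01,  1.12561550e+01],
])

point_3d = np.array([0.0, 0.0, 0.0])  # hypothetical world point

# Append a 1 so the point becomes a 4-vector in homogeneous coordinates;
# the (3,4) @ (4,) product is then well defined and yields the point
# in the camera coordinate frame.
point_3d_h = np.append(point_3d, 1.0)
point_3d_cam = np.matmul(extrinsic_mat, point_3d_h)  # shape (3,)
```

Note that for the world origin the result is just the translation column of the extrinsic matrix, which is a quick sanity check.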

Thank you for noticing this, very well spotted!

Haha, thank you for letting me know that I shouldn't do that; it works now!