xingyizhou / DeepModel

Code repository for Model-based Deep Hand Pose Estimation

uvd convert to xyz coordination

hellojialee opened this issue · comments

Hi~ Thank you for your kind and great open-source code. I have a question about the following lines:

```python
# Crop bounds in pixels: convert u (or v) to camera space at depth d,
# offset by half the cube size (mm), then project back to pixels.
xstart = int(math.floor((u * d / fx - cube_size / 2.) / d * fx))
xend = int(math.floor((u * d / fx + cube_size / 2.) / d * fx))
ystart = int(math.floor((v * d / fy - cube_size / 2.) / d * fy))
yend = int(math.floor((v * d / fy + cube_size / 2.) / d * fy))
```
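The lines above can be run in isolation; a self-contained sketch is below. The fx and fy values are the ones quoted later in this thread, while u, v, d, and cube_size are example values I made up. Note that the expression simplifies algebraically: projecting a metric offset of cube_size/2 at depth d back to pixels is just u ± cube_size * fx / (2 * d).

```python
import math

# fx, fy: NYU intrinsics quoted in this thread; the rest are example values.
fx, fy = 588.03, 587.07
u, v = 320.0, 240.0          # pixel position of the hand center
d = 800.0                    # depth of the hand center, in mm
cube_size = 300.0            # side length of the crop cube, in mm

xstart = int(math.floor((u * d / fx - cube_size / 2.) / d * fx))
xend = int(math.floor((u * d / fx + cube_size / 2.) / d * fx))

# Simplified form of the same computation.
assert xstart == int(math.floor(u - cube_size * fx / (2 * d)))
assert xend == int(math.floor(u + cube_size * fx / (2 * d)))
```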

where, in your code,
fx = 588.03
fy = 587.07

And I know fu and fv are decided because the original depth image is 640×480 pixels.

Are fx and fy decided by the camera? Where and how can I get them for the NYU hand pose dataset? I'm a beginner and haven't figured out the transformation between xyz and uvd coordinates.
Could you please give me some help? Thank you!
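For reference, the standard pinhole-camera transformation between uvd and xyz looks like the sketch below. The focal lengths are the values quoted in this thread; the principal point (u0, v0) = (320, 240) is my assumption (the center of a 640×480 image), and the real NYU conversion in convert_xyz_to_uvd.m may differ in sign conventions.

```python
# NYU focal lengths quoted in this thread; (u0, v0) is an assumed
# principal point at the center of the 640x480 depth image.
FX, FY = 588.03, 587.07
U0, V0 = 320.0, 240.0

def uvd_to_xyz(u, v, d):
    """Back-project pixel (u, v) with depth d (mm) to camera-space xyz (mm)."""
    x = (u - U0) * d / FX
    y = (v - V0) * d / FY
    return x, y, d

def xyz_to_uvd(x, y, z):
    """Project camera-space xyz (mm) to pixel coordinates plus depth."""
    u = x * FX / z + U0
    v = y * FY / z + V0
    return u, v, z

# Round trip: projecting and back-projecting recovers the original point.
u, v, d = xyz_to_uvd(100.0, -50.0, 800.0)
x, y, z = uvd_to_xyz(u, v, d)
```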

Another question: in the code that generates the h5 data, are the joint xyz coordinates also normalized to [-1, 1], just as the depth is?

Hi USTClj,
Yes, fx and fy are decided by the camera. They can be obtained from the convert_xyz_to_uvd.m file in the official NYU hand dataset. And yes, x, y, and z are all normalized to [-1, 1] as the training target.
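A minimal sketch of that normalization, assuming the joints are expressed relative to a crop cube centered at some reference point with side length cube_size (mm). The function and variable names here are illustrative, not taken from the repo.

```python
def normalize_joint(x, y, z, center, cube_size):
    """Map a joint's xyz (mm) into [-1, 1] relative to the crop cube."""
    cx, cy, cz = center
    half = cube_size / 2.0
    return ((x - cx) / half, (y - cy) / half, (z - cz) / half)

def denormalize_joint(nx, ny, nz, center, cube_size):
    """Inverse mapping: recover metric xyz from the normalized target."""
    cx, cy, cz = center
    half = cube_size / 2.0
    return (nx * half + cx, ny * half + cy, nz * half + cz)

# A joint 50 mm behind the cube center maps to z = 1/3, and the
# round trip recovers the original coordinates.
center, cube_size = (0.0, 0.0, 750.0), 300.0
nx, ny, nz = normalize_joint(10.0, 20.0, 800.0, center, cube_size)
x, y, z = denormalize_joint(nx, ny, nz, center, cube_size)
```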