xingyizhou / pytorch-pose-hg-3d

PyTorch implementation for 3D human pose estimation


the 3d loss

JWSunny opened this issue · comments

commented

Hello,

Thanks for sharing this work. In the code (e.g. h36m.py and train3d.py), I don't understand 'reg_ind' and 'reg_target'. Can you explain these parameters and why they are used to compute the 3D pose loss? Is there any basis or reference? Thanks.

Hi,
Thanks for looking at the code. These are good questions.

  1. output[-1]['depth'] has shape batch x 16 x H x W. The code first extracts the values at the ground-truth locations (batch['reg_ind']), reducing it to batch x 16 x 1. After that, it is the same as direct depth regression. This part of the code is adapted from CornerNet.
  2. varloss is Eq. 3 in the paper.
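The extraction step in point 1 can be sketched as follows. This is a minimal illustration of the gather-at-index pattern (as used in CornerNet-style code), not the repository's exact implementation; the tensor names and sizes are assumptions based on the shapes described above.

```python
import torch

# Dense per-joint depth map, shape: batch x 16 x H x W (as in the answer above).
batch_size, num_joints, H, W = 2, 16, 64, 64
depth_map = torch.randn(batch_size, num_joints, H, W)

# reg_ind holds the flattened (y * W + x) pixel index of each joint's
# ground-truth location, shape: batch x 16 (assumed layout).
reg_ind = torch.randint(0, H * W, (batch_size, num_joints))

# Flatten the spatial dims and gather the value at each joint's index,
# yielding batch x 16 x 1. From here, the loss is an ordinary regression
# against the ground-truth depths (the role of batch['reg_target']).
flat = depth_map.view(batch_size, num_joints, H * W)
depth_pred = flat.gather(2, reg_ind.unsqueeze(2))  # batch x 16 x 1
print(depth_pred.shape)
```

So only the predicted depth at the annotated joint pixel enters the loss, which is why the two parameters always appear together.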
commented

@xingyizhou In other words, the location indices (batch['reg_ind']) and the regression depth targets (batch['reg_target']) are related, and this comes from the CornerNet paper?

Yes.

commented

@xingyizhou Hello, in h36m.py, self.mean_bone_length = 4000, but I have seen 4200 mm used in other places, and I want to know how this value is calculated. For example, for a point (x, y, z), are x and y pixel locations in the RGB image or coordinates in the real scene, while z is the depth value in the real scene? Can you give some explanation? Thanks.
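For reference, a dataset-wide mean bone length is typically obtained by summing Euclidean bone lengths over a skeleton edge list and averaging over all training poses. The sketch below is purely illustrative: the edge list and joint layout are hypothetical, not the actual H36M skeleton used in h36m.py, and the constant 4000 is not reproduced here.

```python
import numpy as np

# Hypothetical parent-child joint pairs; the real skeleton has 16 joints
# and a different edge list (assumption for illustration only).
EDGES = [(0, 1), (1, 2), (2, 3)]

def skeleton_length(joints_3d, edges=EDGES):
    """Sum of Euclidean bone lengths for one pose (joints_3d: J x 3, in mm)."""
    return sum(np.linalg.norm(joints_3d[a] - joints_3d[b]) for a, b in edges)

# Averaging skeleton_length over every training pose would yield a constant
# of the same kind as self.mean_bone_length.
pose = np.array([[0.0, 0, 0],
                 [0, 0, 1000],
                 [0, 1000, 1000],
                 [1000, 1000, 1000]])
print(skeleton_length(pose))  # 3000.0 for this toy pose
```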