timctho / convolutional-pose-machines-tensorflow


Dear Author, how can I run the demo_cpm_hand with a video?

lilyswang opened this issue

Dear Author, how can I run the demo_cpm_hand with a video? Thanks!

Hi, I am also trying to do that. Have you found a way to get it working? If so, could you share it? Anyone else who knows how to give a video as input and get a proper output, please share as well.

Hi @lilyswang and @DilaraSina ,

In theory it's possible. In this repo, a Kalman filter is implemented to track the hands, and the initial position of the hand is assumed to be at the center of the frame. So if the hand is at the center of the frame at the start of the input video, or at any point during it, it can be detected. However, there is another approach one can use, which consists of the following steps:

  1. Detect the hand in the frame (one can use body pose estimation or other hand detection algorithms).
  2. Crop, zoom, and pad the hand area so that the hand covers as much of the cropped image as possible and the cropped image is 256 x 256 pixels (see the sketch after this list).
  3. Pass it to the joint point detector.
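
A minimal sketch of step 2, assuming a `(x, y, w, h)` bounding box produced by whatever hand detector you use in step 1 (the helper name `crop_and_pad_hand` is just for illustration, not something from this repo):

```python
import cv2

def crop_and_pad_hand(frame, bbox, out_size=256):
    """Crop a square around the hand bbox, pad with black if the square
    runs past the frame border, and resize to out_size x out_size."""
    x, y, w, h = bbox
    side = max(w, h)
    # Center the square crop on the hand box
    cx, cy = x + w // 2, y + h // 2
    x0 = max(cx - side // 2, 0)
    y0 = max(cy - side // 2, 0)
    crop = frame[y0:y0 + side, x0:x0 + side]
    # NumPy slicing clamps at the frame edges, so pad whatever is missing
    pad_y = side - crop.shape[0]
    pad_x = side - crop.shape[1]
    if pad_y > 0 or pad_x > 0:
        crop = cv2.copyMakeBorder(crop, 0, pad_y, 0, pad_x,
                                  cv2.BORDER_CONSTANT, value=0)
    return cv2.resize(crop, (out_size, out_size))
```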

Hope it helps :)

Thanks,
Asif

Thanks for your answer. The README says 'You can also use video files like .avi, .mp4, .flv'. My understanding is that we can pass files with these extensions and get an output; how can we do that?

Hi @DilaraSina,

If you use the model from the code directly, you can pass it any image you want and get the output. For example, read the video using the OpenCV library and pass each frame to the model, prepared the way the model expects, i.e. a 256 x 256 crop of the hand region in which the hand covers most of the area.
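
A minimal sketch of that loop, reusing the hypothetical `crop_and_pad_hand` helper from the earlier comment, with placeholder callables `detect_hand_bbox` and `run_cpm_model` standing in for step 1 and step 3 (none of these names come from the repo):

```python
import cv2

def run_on_video(path, detect_hand_bbox, run_cpm_model):
    """Read a video file frame by frame and feed each prepared
    hand crop to the joint detector."""
    # VideoCapture handles .avi, .mp4, .flv, etc., given matching codecs
    cap = cv2.VideoCapture(path)
    try:
        while True:
            ret, frame = cap.read()
            if not ret:            # end of video or read error
                break
            bbox = detect_hand_bbox(frame)
            if bbox is None:       # no hand found in this frame
                continue
            hand_img = crop_and_pad_hand(frame, bbox, out_size=256)
            joints = run_cpm_model(hand_img)
            # e.g. draw the joints on the frame or write them out here
    finally:
        cap.release()
```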

Many Thanks,

Asif