tensorlayer / HyperPose

Library for Fast and Flexible Human Pose Estimation

Home Page: https://hyperpose.readthedocs.io


How to get detection coordinates?

sambo55 opened this issue · comments

Hi,

Love the work, great performance. How can I access the detected keypoints?

Thanks

If you are using our C++ library, #298 (comment) may be helpful.

This will get your points printed:

        float imageWidth = 720.0f;
        float imageHeight = 540.0f;

        for (auto&& pose : pose_vectors[0]) {
            std::cout << "Pose Start" << std::endl;
            for (int i = 0; i < 18; i++) { // could be 19 for different models
                if (pose.parts[i].has_value) {
                    std::cout << "Point " << i << " ("
                              << int(pose.parts[i].x * imageWidth) << ","
                              << int(pose.parts[i].y * imageHeight) << ") Score: "
                              << int(pose.parts[i].score * 100) << "% | ";
                }
            }
            std::cout << "Pose End" << std::endl;
        }

Produces:

Pose Start
Point 1 (238,72) Score: 82% | Point 2 (250,72) Score: 78% | Point 3 (254,98) Score: 30% | Point 4 (234,108) Score: 17% | Point 5 (226,72) Score: 64% | Point 6 (207,87) Score: 70% | Point 7 (226,108) Score: 58% | Point 8 (250,131) Score: 67% | Point 9 (258,182) Score: 78% | Point 10 (258,245) Score: 65% | Point 11 (234,130) Score: 53% | Point 12 (223,162) Score: 72% | Point 13 (219,212) Score: 67% | Point 16 (246,49) Score: 40% | Point 17 (230,49) Score: 30% | Pose End
Pose Start
Point 0 (395,88) Score: 73% | Point 1 (434,215) Score: 50% | Point 2 (352,214) Score: 42% | Point 3 (277,273) Score: 21% | Point 4 (262,288) Score: 9% | Point 5 (520,217) Score: 39% | Point 6 (551,332) Score: 23% | Point 7 (453,334) Score: 10% | Point 14 (391,75) Score: 63% | Point 15 (418,75) Score: 77% | Point 17 (477,102) Score: 74% | Pose End 

Closed due to inactivity.


Hello, I want to implement OpenPose and ultimately export the keypoints as a JSON file (https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/02_output.md#output-format). May I add code for this there? I hope you can give some suggestions.