stevenjj / openpose_ros

A ros wrapper for the CMU openpose library

Failed to call service detect_poses

erolley-parnell opened this issue · comments

Hi,

I'm having some problems with the 3D skeleton extraction (skeleton_extract_3d). The wrapper works absolutely fine in 2D. At the moment the problem is that the detect_poses service is not being found. I checked that the detect_poses rosservice is actually being advertised, and it is, but the skeleton_extract_3d node cannot find it.

I first run in two separate windows:
rosrun skeleton_extract_3d skeleton_extract_3d_node
rosrun openpose_ros_pkg openpose_ros_node

I get this error:

Cloud and Image Messages Received!
[ INFO] [1509973248.665907632]: Cloud Time Stamp: 1509973248.593084
[ INFO] [1509973248.665932901]: Image Time Stamp: 1509973248.599147
[ERROR] [1509973248.668825664]: Failed to call service detect_poses

Because I have been trying to get this running on my own PC, I have had to change a few things. Would you mind telling me which files I should check to fix this error? I am using the dkanou branch at the moment, but as a new member of GitHub, I'm not sure if you can access it.

Thanks for any help you might be able to give.

Using the correct launcher

I can see that fork, and not much has changed, so my comments below still apply. The correct way to use the 3D skeleton node is to run the following command in a single terminal:

roslaunch skeleton_extract_3d openpose_skeleton_extract.launch 

which launches the necessary nodes and remaps the names.


Node and service/client name mismatch

If this fixes your problem, the explanation is that running the nodes separately has two issues: (1) the wrong node is being used, and (2) the service and client names do not match.

For (1), the launch file actually uses the node openpose_ros_node_3d, not openpose_ros_node, as seen here:

<node name="openpose_ros_node_3d" pkg="openpose_ros_pkg" type="openpose_ros_node_3d" output="screen" />

For (2), in skeleton_extract_3d/src/skeleton_extract_3d_node.cpp, you can see that the node creates a client for a service called skeleton_2d_detector:

    client = nh.serviceClient<openpose_ros_msgs::GetPersons>("skeleton_2d_detector");

However, the service is actually called detect_poses_3d, which is advertised in
openpose_ros/openpose_ros_pkg/src/openpose_ros_node_3d.cpp:

  ros::ServiceServer service = nh.advertiseService("detect_poses_3d", detectPosesCallback);

Although the service and client names differ, the launch file reconciles them via:

    <remap from="~skeleton_2d_detector" to="detect_poses_3d" />

This may have been difficult to track down because the error message refers to detect_poses rather than detect_poses_3d, and it gives no hint that the client name needs remapping. Since this node was a collaborative effort, some debug messages were never updated. I will change the debug message to make this clearer to future users.
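For example, a clearer failure branch could print the resolved service name rather than a hard-coded string. This is just a sketch of a typical roscpp call site, not the exact code in the repo, and the request field name (image) is a guess:

    openpose_ros_msgs::GetPersons srv;
    srv.request.image = image_msg;  // request field name is a guess

    if (!client.call(srv))
    {
      // getService() returns the resolved name (after any remapping),
      // so a client/service name mismatch shows up directly in the error.
      ROS_ERROR("Failed to call service %s", client.getService().c_str());
      return;
    }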

As for why it is set up this way, remapping lets you test/use different ROS services without changing the source file. In this case, it's not necessary, but it's always useful to have.

Let me know if it works or not.

So I had tried using the roslaunch file before now, and it always stopped at:

...
Ignoring source layer loss_stage6_L1
I1106 16:42:42.020645 23480 net.cpp:744] Ignoring source layer weight_stage6_L2
I1106 16:42:42.020648 23480 net.cpp:744] Ignoring source layer loss_stage6_L2
[ INFO] [1509982962.037193215]: Initialization Successful!

At that point I assumed the problem was within the individual nodes (hence my trying to run them separately). Given that this was not the problem, I still need to figure out why the launch stops here instead of giving me the x, y, and z values shown in your instructions. All of the camera topics are being published and can be viewed in rviz just fine. Any ideas?

Thank you for all your hard work!

I think it's working properly; the launch file suppresses the output of skeleton_extract_3d_node.

Why don't you change this line:

<node name="skeleton_extract_3d_node" pkg="skeleton_extract_3d" type="skeleton_extract_3d_node" >

to

<node name="skeleton_extract_3d_node" pkg="skeleton_extract_3d" type="skeleton_extract_3d_node" output="screen" >

So you can see if you are getting callbacks for the point cloud and images.
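If you want to sanity-check the topics independently of the extractor, a minimal synchronized subscriber along these lines would print something whenever a cloud/image pair arrives. This is only a sketch (the topic names, queue sizes, and ApproximateTime policy are my assumptions, not the repo's actual code):

    #include <ros/ros.h>
    #include <sensor_msgs/Image.h>
    #include <sensor_msgs/PointCloud2.h>
    #include <message_filters/subscriber.h>
    #include <message_filters/synchronizer.h>
    #include <message_filters/sync_policies/approximate_time.h>
    #include <boost/bind.hpp>

    typedef message_filters::sync_policies::ApproximateTime<
        sensor_msgs::PointCloud2, sensor_msgs::Image> SyncPolicy;

    void callback(const sensor_msgs::PointCloud2::ConstPtr& cloud,
                  const sensor_msgs::Image::ConstPtr& image)
    {
      ROS_INFO("Cloud and Image Messages Received!");  // mimics the node's own output
      ROS_INFO("Cloud Time Stamp: %f", cloud->header.stamp.toSec());
      ROS_INFO("Image Time Stamp: %f", image->header.stamp.toSec());
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "callback_check");
      ros::NodeHandle nh("~");

      // Remap ~point_cloud and ~image to your camera topics, as the launch file does.
      message_filters::Subscriber<sensor_msgs::PointCloud2> cloud_sub(nh, "point_cloud", 1);
      message_filters::Subscriber<sensor_msgs::Image> image_sub(nh, "image", 1);
      message_filters::Synchronizer<SyncPolicy> sync(SyncPolicy(10), cloud_sub, image_sub);
      sync.registerCallback(boost::bind(&callback, _1, _2));

      ros::spin();
      return 0;
    }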

After changing this line, the output changes to this:

...
I1107 11:25:56.572057 21295 net.cpp:744] Ignoring source layer loss_stage6_L2
[ INFO] [1510050356.588497653]: Initialization Successful!
[ INFO] [1510050356.588870209]: detectPosesCallback
[ INFO] [1510050356.589182663]: Parsed image
[ INFO] [1510050356.589197729]: Perform forward pass with the following settings:
[ INFO] [1510050356.589208062]: - net_input_size: 640 480
[ INFO] [1510050356.589218914]: - num_scales: 1
[ INFO] [1510050356.589237084]: - scale_gap: 0.3
[ INFO] [1510050356.589250038]: - image_size: [640 x 480]
[ INFO] [1510050356.589268574]: Initialized Net Size
[ INFO] [1510050356.715981972]: Preparing for forward pass
[ INFO] [1510050357.014725244]: g_pose_extractor->forwardPass done
init done
opengl support available

at which point it opens a visualiser titled OpenPose Example, takes a single frame from the camera, and then stops. If the process is left running for a short while, it produces this error:

[skeleton_extract_3d_node-2] process has died [pid 6120, exit code -9, cmd /data/emily/catkin_ws/src/openpose_ros/devel/lib/skeleton_extract_3d/skeleton_extract_3d_node ~point_cloud:=/camera/depth/points ~image:=/camera/rgb/image_raw ~skeleton_2d_detector:=detect_poses_3d __name:=skeleton_extract_3d_node __log:=/data/emily/.ros/log/fc61b18e-c3a5-11e7-9627-38d54778323c/skeleton_extract_3d_node-2.log].
log file: /data/emily/.ros/log/fc61b18e-c3a5-11e7-9627-38d54778323c/skeleton_extract_3d_node-2*.log

But I don't find anything within the log file.

Some ROS_INFO statements I added show that each of the publishers is declared and that it does enter the callback.

It looks like it went through the neural net just fine. The issue is in processing the point cloud.

I would try changing the point cloud and image topics. To start, maybe use this point cloud topic (assuming you're using a Kinect, based on your launch file):

camera/depth_registered/points

The 3D extractor works by correlating pixel indices in the image with the depth-registered points, i.e., each pixel of the image has a corresponding 3D point, but a pixel may sometimes correspond to an invalid point, so only valid points are used. In any case, the node expects a registered point cloud with the same dimensions (resolution) as the input image, and it extracts the corresponding 3D points.
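In code terms, the lookup is roughly the following. This is a simplified sketch, not the actual extractor code, and it assumes an organized pcl::PointCloud<pcl::PointXYZ> whose width and height match the image resolution:

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <cmath>

    // Look up the 3D point that corresponds to a 2D keypoint (u, v).
    // Returns false when the pixel falls outside the cloud or on an
    // invalid (NaN) depth reading.
    bool lookup3dPoint(const pcl::PointCloud<pcl::PointXYZ>& cloud,
                       int u, int v, pcl::PointXYZ& point_out)
    {
      if (u < 0 || v < 0 ||
          u >= static_cast<int>(cloud.width) ||
          v >= static_cast<int>(cloud.height))
        return false;  // keypoint outside the cloud's resolution

      const pcl::PointXYZ& p = cloud.at(u, v);  // organized cloud: column u, row v
      if (!std::isfinite(p.z))
        return false;  // invalid depth at this pixel

      point_out = p;
      return true;
    }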

Also, the speed seems a little slow. Are you using a GPU? Does the 2D tracker seem to be real time for you?

I'm using an Asus camera instead of a Kinect, but openni2 produces the same outputs. I've changed depth to depth_registered and it doesn't really seem to make any difference. It still only uses one frame, the error that appears is the same, and the log file is still empty.

On a GPU, the original openpose code ran a little slower than real time, but anything using the wrapper has been at about 1 FPS the whole time.

I think I may have found your problem. I looked at your forked repo and noticed that your neural net still outputs 1024x1024. This is the setting for the camera that I was using. If this value is not changed, the indices given by the service will be out of bounds so the skeleton extractor node will access an invalid pixel index.

Go to your openpose_ros/openpose_ros_pkg/src/openpose_ros_node_3d.cpp file and change

  output_size.x = getParam(local_nh, "output_width", 1024);
  output_size.y = getParam(local_nh, "output_height", 1024);

to

  output_size.x = getParam(local_nh, "output_width", 640);
  output_size.y = getParam(local_nh, "output_height", 480);

Let me know if it works or not.

Just changed that, and unfortunately still nothing. I'm not sure if it is relevant, but the single frame has a very strange hue, different from the normal RGB input. I had a look but couldn't figure out what was causing it.

Sadly, I don't think the image hue should matter.

Hm, I'm running out of ideas I can give you remotely. This is quite strange, as another user who also uses the Xtion Pro (which I am assuming is the Asus camera you're using) managed to make the skeleton extractor work. The most common problem is just setting the resolution and the cloud and camera topics correctly:

#5

This is a silly question, but did you rebuild the code after changing those files?

Also, I was expecting more ROS_INFO print outs such as

  ROS_INFO("num people: %d", num_people);
....
      ROS_INFO("        body part: %s", body_part_string.c_str());
      ROS_INFO("            (x, y, confidence): %i, %i, %f", bodypart_detection.x, bodypart_detection.y, bodypart_detection.confidence);
.....

  ROS_INFO("Detected %d persons", (int) res.detections.size());

It seems that your code actually gets stuck before line 210 (in your branch).

Perhaps placing some print statements and seeing where it fails will give us a better idea. If you can figure that out, we might be able to debug this more.
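Even something as crude as numbered checkpoints would narrow it down (hypothetical labels, to be scattered after each major step of detectPosesCallback):

    ROS_INFO("detectPosesCallback: checkpoint 1 (image parsed)");
    ROS_INFO("detectPosesCallback: checkpoint 2 (forward pass done)");
    ROS_INFO("detectPosesCallback: checkpoint 3 (response filled in)");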

Also, how do you know that the 2D version is working? Is there a skeleton drawn on the image? Is the resolution what you expect?

In any case, this is strange because if the 2D version is working, then the neural net should be outputting the skeleton properly, and it should not get stuck before that line I just mentioned.

Nope, the 2D definitely works. I get a good skeleton on the image as well and it works at about 2.4 fps at the moment (over ssh). The code was definitely rebuilt.
After running some ROS_INFO checks, the code doesn't get past this line:

frameDisplayer.displayFrame(outputImage, 0); //Alternative: cv::imshow(outputImage) + cv::waitKey(0)

I just realised that that line in particular blocks the code. I went back and checked whether it was commented out in your code, and it is. I commented it out again and now it works!
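For anyone who wants to keep the visualiser instead of commenting that line out: the 0 makes it wait forever for a key press, so a non-blocking version along these lines should also work (just a sketch with plain OpenCV, assuming outputImage is, or can be converted to, a cv::Mat):

    #include <opencv2/highgui/highgui.hpp>

    // waitKey(1) pumps the GUI event loop for ~1 ms and returns immediately,
    // whereas waitKey(0) blocks until a key is pressed.
    cv::imshow("OpenPose Example", outputImage);
    cv::waitKey(1);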

Thank you for all your help!

@erolley-parnell
When I attempted the roslaunch file, it also stopped at:
I1106 16:42:42.020645 23480 net.cpp:744] Ignoring source layer weight_stage6_L2
I1106 16:42:42.020648 23480 net.cpp:744] Ignoring source layer loss_stage6_L2
[ INFO] [1509982962.037193215]: Initialization Successful!

Do you know why? Thanks!

It's been a while since I've looked at the code, but @smarthhy, I believe it's waiting for an image from a ROS topic. You need to set up your topic names properly in the launch files.