MyNameIsCosmo / lidar_body_tracking

ROS Catkin package to track people using octree and cluster extraction

Does not work at all. After a complete installation, all I got was the settings tab, but no results.

aquibrash87 opened this issue

I would love to give it a try. Can you please point out any step which you have not documented?

Hi @aquibrash87

I did not document adjusting the URDF and launch files for the sensor you are using:
https://github.com/MyNameIsCosmo/lidar_body_tracking/blob/master/urdf/m8.urdf.xacro
https://github.com/MyNameIsCosmo/lidar_body_tracking/blob/master/launch/lidar_body_tracking.launch

As for the lidar_body_tracking.launch file, set the scan_topic param to your 3D point cloud topic, and uncomment the rviz_filtered.launch include line.
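For reference, the edits look roughly like this (a sketch only; defer to the linked launch file if the parameter or file names differ):

```xml
<launch>
  <!-- Point the tracker at your own 3D point cloud topic -->
  <param name="scan_topic" value="/your_lidar/points"/>

  <!-- Uncomment the rviz include line to get visualization out of the box -->
  <include file="$(find lidar_body_tracking)/launch/rviz_filtered.launch"/>
</launch>
```

Here /your_lidar/points is a placeholder for whatever topic your driver publishes.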

For the URDF, take a look at the frame_id of your 3D lidar's point cloud output, and adjust the URDF link name accordingly.
Right now the link name is QP308, which was the frame_id of the Quanergy lidar I originally used.
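In xacro terms, the change is just the link name (a sketch; everything except the name is illustrative):

```xml
<robot xmlns:xacro="http://www.ros.org/wiki/xacro" name="lidar">
  <!-- The link name must match your point cloud's frame_id -->
  <link name="QP308"/>  <!-- e.g. change to "velodyne" for the stock VLP-16 driver -->
</robot>
```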

Please note that this code was more or less a one-off to help a Burning Man project, so the code quality reflects that. It was meant to be adjustable and to provide basic positions of clusters that fall within the bounds of a person. It does not do anything advanced beyond filtering, octree-based spatial change detection, and clustering.
It is, however, a good reference for ROS 1 packages that use C++, PCL, and dynamic reconfigure.

[screenshot: the user's RViz output]
Hi, can you help me with which parts of the code I need to change to visualize the lidar data? The lidar I'm using is not the same as yours; I'm using a Velodyne VLP-16.

The picture above is the result when I run your code.

You will have to change the URDF frame_id to match the frame_id of your Velodyne points topic:
https://github.com/MyNameIsCosmo/lidar_body_tracking#notes

Specifically, you will need to replace "QP308" with your Velodyne frame_id here:
https://github.com/MyNameIsCosmo/lidar_body_tracking/blob/master/urdf/m8.urdf.xacro#L8,L10

Keep in mind, this code was written for a stationary LiDAR; no movement or tracking filters have been implemented. The code is rough, and you will have to use the dynamic reconfigure menu to find the sweet spot for your data (see the command below).
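If you haven't used dynamic reconfigure before, the stock GUI is the easiest way to browse and tweak the parameters:

```sh
# Open the dynamic reconfigure GUI, then select the tracking node in the left pane
rosrun rqt_reconfigure rqt_reconfigure
```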

If you are using the default launch commands, consider uncommenting the following lines:
https://github.com/MyNameIsCosmo/lidar_body_tracking/blob/master/launch/lidar_body_tracking.launch#L12-L13

Closing this issue as the original poster has not responded, and the comments in this thread should solve their issues.

Hi Cosmo. I would like to know if this works for real-time tracking. I have an M8 lidar, but I don't know where to put the lidar IP, or does it just work by changing the name "QP308" to my lidar's frame name?
Also, how should I start the lidar?

Thanks!

@jfelipemojica This is OK for "real-time" tracking, but the update rate depends on your hardware. This software was written for a static (non-moving) lidar assembly that needed some way to track movement around the lidar. Given the 10 Hz scan rate of the Quanergy M8, this program tracked at 10 Hz.

You should start the lidar with the ROS driver for that lidar, and check the Notes section of the README.md for the URDF and topic changes.

If you need to find your frame_id, start the ROS lidar driver up and echo the header of a single message on your points topic; I think the following works (the topic path is a placeholder):
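```sh
# Print the frame_id from one message on your point cloud topic
# (replace /lidar/topic/points with your actual topic name)
rostopic echo -n 1 /lidar/topic/points/header/frame_id
```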

Hi, Cosmo. I was able to run the program by changing the frame ID, as in the picture below.
[screenshot: RViz output after the frame ID change]

But, based on the screenshot, human detection does not seem to be working. I am sitting near the LiDAR. Is there anything I missed? I have already installed the people package for ROS.

Regards,
Welly

Hrmph, the scale of your lidar scan seems small, and it is denser than the LiDAR I used (the 8-layer Quanergy M8).

Play around with the values from Dynamic Reconfigure (or the cfg file); you might get detections by increasing the min/max cluster sizes and the cluster tolerance. A command-line option is sketched below.
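If you'd rather script the tuning than click through the GUI, the dynparam tool from dynamic_reconfigure can set values at runtime. A sketch, assuming the node is called /lidar_body_tracking (check rosnode list for the real name; cluster_tolerance is the only parameter name confirmed by the cfg file):

```sh
# Loosen the clustering at runtime; /lidar_body_tracking is a guessed node name
rosrun dynamic_reconfigure dynparam set /lidar_body_tracking cluster_tolerance 0.5
```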

If you compare your picture to the GIF in the README, you will see the scale and layer difference. Of course, that GIF was taken in a fairly large, open area.

Hi Cosmo, would you like to explain more about these parameters? I'm not familiar with them.

- cluster_tolerance: the cfg file describes it as "Cluster Tolerance in meters". Is it the distance between one point and another in the point cloud?
- leaf_size
- resolution

Regards,
Welly

Hi,

I am interested in this and want to try it with a Velodyne device. Can you tell me whether you were able to detect humans successfully with your device?

Thanks.

@liemwellys I thought I responded to this; guess it slipped...

Cluster tolerance is the maximum distance, in meters, between neighboring points for them to be grouped into the same cluster; it works together with the minimum and maximum cluster sizes to determine whether a cluster resembles a person. This isn't a great method for detection, but it was a quick implementation to get some other software written.

Leaf size and resolution relate to the octree and are aimed at performance optimization.
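In PCL terms, here is a minimal sketch of how these parameters commonly map onto the library, assuming leaf_size drives a VoxelGrid downsample, resolution sets the octree change detector, and cluster_tolerance feeds Euclidean cluster extraction (the node in this repo may wire them differently):

```cpp
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/octree/octree_pointcloud_changedetector.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

// background: a scan of the empty scene; current: the live scan
std::vector<pcl::PointIndices> findClusters(const Cloud::Ptr& background,
                                            const Cloud::Ptr& current)
{
  // leaf_size: voxel edge length (meters) used to downsample the cloud;
  // larger values run faster but lose detail
  Cloud::Ptr filtered(new Cloud);
  pcl::VoxelGrid<pcl::PointXYZ> vg;
  vg.setInputCloud(current);
  vg.setLeafSize(0.05f, 0.05f, 0.05f);
  vg.filter(*filtered);

  // resolution: octree voxel size (meters) used for spatial change
  // detection between the background scan and the current scan
  pcl::octree::OctreePointCloudChangeDetector<pcl::PointXYZ> octree(0.1f);
  octree.setInputCloud(background);
  octree.addPointsFromInputCloud();
  octree.switchBuffers();                    // freeze background, start a new buffer
  octree.setInputCloud(filtered);
  octree.addPointsFromInputCloud();
  pcl::IndicesPtr changed(new std::vector<int>);
  octree.getPointIndicesFromNewVoxels(*changed);  // points absent from background

  // cluster_tolerance: maximum distance (meters) between neighboring points
  // for them to be grouped into one cluster; min/max bound the point count
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.35);
  ec.setMinClusterSize(10);
  ec.setMaxClusterSize(5000);
  ec.setSearchMethod(tree);
  ec.setInputCloud(filtered);
  ec.setIndices(changed);                    // cluster only the changed points
  std::vector<pcl::PointIndices> clusters;
  ec.extract(clusters);
  return clusters;
}
```

With a denser sensor like the VLP-16, the same person produces many more points per cluster, which is why raising the min/max cluster sizes and the tolerance was suggested earlier in this thread.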

@huyusheng123 The method used in this repository assumes that clusters of points which move relative to a stationary lidar, and which fall within a defined size, are a person.
This works for quick proofs of concept where you are not dealing with varied environments or moving equipment.
It was written for a stationary LED tower that lit up based on the location and movement of people around the tower. A rough sketch of the size check is below.
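To make the "within a defined size" part concrete, the gate amounts to something like this (a hypothetical sketch, not the repo's actual code; the numeric bounds are illustrative):

```cpp
#include <pcl/common/common.h>   // pcl::getMinMax3D
#include <pcl/point_types.h>

// Rough person-sized bounding-box test for one extracted cluster.
// The bounds here are illustrative, not the values used by the repo.
bool looksLikePerson(const pcl::PointCloud<pcl::PointXYZ>& cluster)
{
  pcl::PointXYZ min_pt, max_pt;
  pcl::getMinMax3D(cluster, min_pt, max_pt);
  const float dx = max_pt.x - min_pt.x;   // footprint width
  const float dy = max_pt.y - min_pt.y;   // footprint depth
  const float dz = max_pt.z - min_pt.z;   // height
  return dx < 1.0f && dy < 1.0f &&        // roughly shoulder-width footprint
         dz > 0.3f && dz < 2.2f;          // plausible human height range
}
```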