praveen-palanisamy / multiple-object-tracking-lidar

C++ implementation to detect, track and classify multiple objects using LIDAR scans or point cloud data

Node fails because pointcloud has nan points

anir90 opened this issue · comments

I tried remapping the point cloud topic to filtered_cloud and ran the node, but I get this error.

kf_tracker: /build/pcl-OilVEB/pcl-1.8.1+dfsg1/kdtree/include/pcl/kdtree/impl/kdtree_flann.hpp:172: int pcl::KdTreeFLANN<PointT, Dist>::radiusSearch(const PointT&, double, std::vector&, std::vector&, unsigned int) const [with PointT = pcl::PointXYZ; Dist = flann::L2_Simple]: Assertion `point_representation_->isValid (point) && "Invalid (NaN, Inf) point coordinates given to radiusSearch!"' failed.
Aborted (core dumped)

Digging a little deeper, I found that this happens because a few of the points had NaN values, and surprisingly neither kdtree.hpp nor your code has a condition to deal with that. Have you run into this kind of issue before? How did you solve it?

Hi @anir90,

The point clouds you publish to the "filtered_cloud" topic are not expected to contain NaNs; the code assumes valid (non-NaN) point cloud data on that channel. Filtering is somewhat task- and application-dependent, and is therefore not subsumed by this module.
The PCL library provides the pcl::removeNaNFromPointCloud(...) method to filter out NaN points. You can refer to the example code snippet provided in this comment: #29 (comment)
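For reference, with PCL available a single call to pcl::removeNaNFromPointCloud(cloud_in, cloud_out, indices) does the filtering. The self-contained sketch below shows the same idea on a plain point struct, so it compiles without PCL; the `Point` type and `removeNaNs` helper are illustrative names, not part of this repo:

```cpp
#include <cmath>
#include <vector>

// Illustrative stand-in for pcl::PointXYZ.
struct Point { float x, y, z; };

// Keep only points whose coordinates are all finite, mirroring what
// pcl::removeNaNFromPointCloud(cloud_in, cloud_out, indices) does.
// std::isfinite also rejects Inf, matching the assertion message
// ("Invalid (NaN, Inf) point coordinates given to radiusSearch!").
std::vector<Point> removeNaNs(const std::vector<Point>& in) {
    std::vector<Point> out;
    out.reserve(in.size());
    for (const Point& p : in) {
        if (std::isfinite(p.x) && std::isfinite(p.y) && std::isfinite(p.z))
            out.push_back(p);
    }
    return out;
}
```

Running this on the raw cloud before publishing to filtered_cloud should prevent the radiusSearch assertion from firing.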

Hope that helps

Hi Praveen,

Thank you so much for your quick reply. Yes, I was able to get rid of that error by using a removeNaN filter. I only just saw that an issue had already been raised with the same problem; I missed it when I checked earlier. Sorry about that. It might be useful to note in your README that the input point cloud should not contain NaN points.

Though I solved the error, I'm not getting any output: none of the objects are being detected, and I'm not able to identify why. I'm simulating the turtlebot world with two objects in the robot's current view, as shown below.

[screenshot 1: simulated turtlebot world with two objects in view]

When I run the kf_tracker node on top of this, the extraction process takes a long time, about 2 to 5 minutes. This is with the robot stationary. And even then the objects are not detected; all the clusters are just being initialized. But judging from your demo video, the system should perform accurately and much faster. I'm attaching a few console images. Can you help me identify why the objects are not being detected?

[console screenshots 2, 3 and 4]

I'm not able to dig into this much and debug it myself, as I don't understand the theory behind your work in depth. It would also be useful if you could share some resources for understanding that theory.

Hi @anir90 ,
Thank you for your suggestion. I have added a note in the README about the NaN points and the PCL filter.

Thank you for describing your setup in detail. Can you post a screenshot of RViz visualizing the point clouds that you are publishing to the filtered_cloud topic?
That will give a view into what the inputs are and provide clues for debugging why the clustering and tracking aren't performing as expected.

Sure. Here's the rviz visualization of the input point cloud:
You can see the raw image at the bottom-left corner as well.

[screenshot 5: RViz visualization of the input point cloud]

That's helpful!
I see two objects in this scan frame. I also saw two clusters reported in the output you shared in the previous images, which I assume are cluster_0 and cluster_1. Other than the absence of RViz markers on the detected object clusters and their tracking IDs (colors), the cluster detection output (2 clusters) looks appropriate to me given this input point cloud. Are you expecting something different?

Are the objects moving? There seems to be no tracking of the objects, though. Also, you could enable the viz output channel to visualize the reported markers, one for each unique object that is detected (and tracked).

Hey! Nope, the clusters themselves are not being detected.

  1. With cluster_tolerance set to 0.3, euclidean clustering takes almost 5 minutes. Was it this slow for you? If so, detecting objects while they are moving is not going to be possible.

  2. I get the following cluster when I add all the clusters together and visualize them for the first frame (with cluster_tolerance set to 0.08). The white cloud is the cloud_clusters after euclidean extraction, while the black cloud is the input cloud. I do not understand why it lies somewhere outside the current scan frame.
    [screenshot: extracted cluster (white) vs. input cloud (black)]

  3. And when I visualize the markers in RViz, I can see only one marker on the screen, which is always at the center, i.e. on the robot footprint. Even in the outputs I attached above, you can see that all of the values are just initialized.
    [screenshot: single RViz marker at the robot footprint]

  1. It is unusual for the clustering to take more than a few hundred milliseconds on a typical point cloud frame. What is the source of the point clouds you are using?

  2. I don't see any reported clusters in the recent figures you shared. Are you referring to the green blob? That doesn't look like a Marker output from this ROS node.

  3. That's not a marker generated by this (multi-object-tracking-lidar) node. That seems to be the origin marker.

Your RViz visualization looks quite different from typical LiDAR data. I am curious what the point cloud source is. Is it from a simulation?
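To illustrate what the tolerance parameter controls: euclidean clustering simply groups points that lie within cluster_tolerance of each other. The sketch below is a naive, self-contained version, not this repo's actual implementation (which uses PCL's EuclideanClusterExtraction over a kd-tree, so it should finish in well under a second on a normal scan frame; minutes of runtime points at a problem with the input cloud or the parameters). The `Pt` struct and `euclideanCluster` function are illustrative names:

```cpp
#include <cmath>
#include <cstddef>
#include <queue>
#include <vector>

// Illustrative stand-in for pcl::PointXYZ.
struct Pt { float x, y, z; };

static float dist(const Pt& a, const Pt& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Naive O(n^2) euclidean clustering: grow each cluster by repeatedly
// pulling in every unvisited point within `tolerance` (meters) of a
// point already in the cluster. Too large a tolerance merges distinct
// objects into one cluster; too small a tolerance fragments one object
// into many.
std::vector<std::vector<int>> euclideanCluster(const std::vector<Pt>& pts,
                                               float tolerance) {
    std::vector<std::vector<int>> clusters;
    std::vector<bool> visited(pts.size(), false);
    for (std::size_t i = 0; i < pts.size(); ++i) {
        if (visited[i]) continue;
        std::vector<int> cluster;
        std::queue<std::size_t> frontier;
        frontier.push(i);
        visited[i] = true;
        while (!frontier.empty()) {
            std::size_t cur = frontier.front();
            frontier.pop();
            cluster.push_back(static_cast<int>(cur));
            for (std::size_t j = 0; j < pts.size(); ++j) {
                if (!visited[j] && dist(pts[cur], pts[j]) <= tolerance) {
                    visited[j] = true;
                    frontier.push(j);
                }
            }
        }
        clusters.push_back(cluster);
    }
    return clusters;
}
```

With two well-separated blobs of points, a tolerance of 0.3 keeps them as two clusters, a very large tolerance merges them into one, and a tolerance smaller than the in-blob point spacing splits every point into its own cluster.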

Closing this since there's no response. Please re-open when you have a response and need a follow-up.