tensorlayer / HyperPose

Library for Fast and Flexible Human Pose Estimation

Home Page: https://hyperpose.readthedocs.io


Paf.cpp, index out of range error

aaron-michaux opened this issue · comments

There seems to be an error with how indices are handled when parsing features. In particular, in the function get_humans:

static std::vector<human_ref_t>
   get_humans(const std::vector<peak_info>& all_peaks,
                        const std::vector<std::vector<connection>>& all_connections)
{
   ...
         auto& hr1 = human_refs[hr_ids[0]];
         auto& hr2 = human_refs[hr_ids[1]]; // See note below
   ...
}

If you change the marked line above to:

auto& hr2 = human_refs.at(hr_ids[1]); // Throws: index out of range!

then the program throws an out-of-range exception at runtime. So somewhere the hr_ids[...] values are getting corrupted.

In particular, if you look at the line:

human_refs.erase(human_refs.begin() + hr_ids[1]);

I believe this corrupts the data structure whenever the erased element is not the last element of the vector. It seems to me that you need to add the following loop immediately after the erase:

for(auto ind = hr_ids[1]; ind < int(human_refs.size()); ++ind)
      human_refs.at(ind).id = ind;

Since I'm not familiar with the algorithm, I'd like confirmation that my intuition is indeed correct.

This is a legitimate (crash) bug that has existed since version 1. Yes: when an element that is not the last one is deleted, all the pre-existing index references past that element need updating. I fixed it in my own local repository but kept forgetting to send a pull request.

Hi @korejan, could you send us a PR for that if convenient?

@korejan Hi, are you available to send us a PR about this? Or you can indicate where you made such modification and I can adopt it manually.

@ganler Sorry, I've been extremely busy at work the past couple of months. The fix is very simple. The reason I haven't submitted a PR is that there are actually two ways to fix it: one is slightly faster for (very) large vectors, but it assumes the human_refs elements are stored in order. If I remember correctly from debugging this a couple of months ago, that was the case, but I don't know whether it was intended, so my local fix iterates through all the entries. My change does this in paf.cpp/get_humans:

[Screenshot of the local diff in paf.cpp/get_humans, captured in GitHub Desktop, 2020-10-08]

@aaron-michaux Hi, could you provide an image sample that can reproduce this error? We are working on a fix for this, and I need to test it.

Hi, I'll try to get something. Sorry, just noticed this. I've also found some other bugs, which I didn't bother to investigate. (Maybe I can send you problem images?)

Yes, please. Just report the errors and bugs and we will try our best to fix them.

How do I send you an image?

It's proprietary -- no big deal if this particular image finds its way onto the interwebs, but if I send you other images, they'll need to be kept private.

Please advise.

I tried many models but did not find any segmentation fault. Could you also indicate which model you are using?