uzh-rpg / vilib

CUDA Visual Library by RPG

Does the starting trace of a point start with an integer?

ThinkPig826 opened this issue · comments

I added the following code:
std::cout<<"first_pos_: "<<track.first_pos_[0]<<", "<<track.first_pos_[1]<<std::endl;
std::cout<<"cur_pos_ : "<<track.cur_pos_[0]<<", "<<track.cur_pos_[1]<<std::endl;
and got the following results:
first_pos_: 698, 36
cur_pos_ : 694.986, 33.3397
first_pos_: 708, 294
cur_pos_ : 705.637, 278.984
first_pos_: 640, 16
cur_pos_ : 636.937, 12.2501
first_pos_: 378, 48
cur_pos_ : 374.201, 37.2297
first_pos_: 700, 24
cur_pos_ : 697.136, 21.1231
...

Does the starting trace of a point start with an integer?

Thank you !

Hello ThinkPig826,
thanks for your question.
The starting position of each feature track depends on the detector used in the feature tracker. If the detector outputs integer coordinates as feature locations, then the first positions of the feature tracks are going to be integers.
This is the case with the FAST feature detector, as it does not perform sub-pixel refinement after non-maximum suppression. Feature positions from lower pyramid levels receive a 2^(l-1) scaling, which is again an integer, so the final feature positions remain integers even with multiple pyramid levels.
Once the features are detected, their positions are adjusted by LK (Lucas-Kanade) tracking, which is why they become non-integer values.
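To illustrate the scaling described above, here is a minimal sketch (our own variable names; the 2^(l-1) factor follows the explanation above, and the exact level-indexing convention depends on vilib):

#include <iostream>

int main() {
  // A FAST corner detected at integer coordinates on a coarser pyramid level...
  const int x_l = 175, y_l = 9;        // integer detection on pyramid level l
  const int level = 2;                 // pyramid level of the detection (l)
  const int scale = 1 << (level - 1);  // 2^(l-1), an integer scaling factor
  // ...maps back to the base image by an integer factor, so it stays an integer.
  std::cout << "base-level position: " << x_l * scale << ", " << y_l * scale << std::endl;
  return 0;
}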

Thank you very much!
Can I replace the points obtained by the detector with my own points? There is only one point per grid cell.

I want to use the vilib library in VINS.

Sure, you can.
The detector is triggered here:

for(std::size_t c=0;c<cur_frames->size();++c) {

  • it is called whenever the tracked feature count drops below a threshold, or the tracker is instructed to maintain a constant number of features for every single frame (see the sketch below).
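As a rough sketch of that condition (the names below are illustrative, not vilib's actual option or member names):

#include <cstddef>

// Illustrative only: the actual option names in vilib's feature tracker differ.
bool shouldRunDetector(std::size_t tracked_feature_count,
                       std::size_t min_tracked_features,     // re-detect when the count drops below this threshold
                       bool maintain_constant_feature_count) // or re-detect on every single frame
{
  return (tracked_feature_count < min_tracked_features) || maintain_constant_feature_count;
}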

Simply remove our code here that adds the features from the detector to the tracker.
Then, to populate the tracker with your own points:

std::shared_ptr<Frame> frame ; // the frame where the point can be observed (so that a template patch can be extracted)
const float x; // x coordinate of the point to be tracked
const float y; // y coordinate of the point to be tracked
const int level; // the actual pyramid level where the point was extracted from, e.g. if you only use a single resolution, then 0
const float score; // the score with which the point was detected, may be arbitrary
const std::size_t camera_id; // in single camera tracking, this is 0
// Add the point to the tracker
int track_index = addTrack(frame,
                                 x,
                                 y,
                                 level,
                                 score,
                                 camera_id);
// Now we add the point to the output frame, so that it can be displayed/or used outside the tracker
addFeature(cur_base_frames[c],track_index,camera_id);
++detected_features_num_[camera_id];
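
For example, if you already have one point per grid cell, the whole thing could look like this (a sketch only: my_points is a hypothetical container of your own (x, y) detections on pyramid level 0, and c is the camera index from the for-loop shown above):

// Sketch: feed your own detections into the tracker instead of the detector output
for(const auto & pt : my_points) {
  const int track_index = addTrack(frame,  // the frame of camera c where your points were extracted
                                   pt.x,   // x coordinate of your point
                                   pt.y,   // y coordinate of your point
                                   0,      // pyramid level (0 if you detect on the full-resolution image)
                                   1.0f,   // arbitrary score
                                   c);     // camera id (0 for a single camera)
  addFeature(cur_base_frames[c], track_index, c);
  ++detected_features_num_[c];
}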

Also make sure that, whenever you add new points, the updateTracks() function is triggered, so that the template patches are precalculated for the incoming frames (the step marked "// 04) Precompute patches & Hessians" in the tracker code).

Thank you very much.
I did that, but I found that the tracking quality is not as good as OpenCV's. I am not sure whether the problem is on my side.