ajabri / videowalk

Repository for "Space-Time Correspondence as a Contrastive Random Walk" (NeurIPS 2020)

Home Page: http://ajabri.github.io/videowalk


Landmark propagation

b4shy opened this issue · comments

Hi!
Thanks for sharing your code; it worked extremely well for my segmentation task!
I suppose landmark detection should also work very well with your method -- have you tried something like this?
It should be very similar to the pose estimation task.
Thanks!

Hi @b4shy,

Thanks for your interest! I'm glad it worked in your setting! What kind of segmentation task was it, out of curiosity?

Most propagation applications (e.g. segment or keypoint propagation) are instances of label propagation. The code provided in test.py implements generic feature propagation through time -- these features can themselves be labels (i.e. a categorical distribution over classes at each spatial position of the first frame). The only assumption is that the map of features to be propagated is given for the first frame; for label propagation, this map should have shape K x H x W, where K is the number of classes.
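To make the mechanism concrete, here is a minimal numpy sketch of one propagation step: a row-stochastic affinity is built from feature similarity between two frames, and the source frame's K x H x W label map is carried to the target frame as a convex combination of source labels. The function name, temperature value, and numpy (rather than the repo's PyTorch) are my assumptions, not the exact code in test.py.

```python
import numpy as np

def propagate_labels(feat_src, feat_tgt, labels_src, temperature=0.07):
    """Propagate a K x H x W label map from a source frame to a target
    frame via a softmax affinity over feature similarity.

    feat_src, feat_tgt: C x H x W feature maps (e.g. encoder outputs)
    labels_src:         K x H x W label map for the source frame
    """
    C, H, W = feat_src.shape
    K = labels_src.shape[0]

    # Flatten spatial dims and L2-normalize each feature column
    fs = feat_src.reshape(C, -1)
    ft = feat_tgt.reshape(C, -1)
    fs = fs / np.linalg.norm(fs, axis=0, keepdims=True)
    ft = ft / np.linalg.norm(ft, axis=0, keepdims=True)

    # Affinity: each target position attends over all source positions
    logits = (ft.T @ fs) / temperature           # (HW_tgt) x (HW_src)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    aff = np.exp(logits)
    aff /= aff.sum(axis=1, keepdims=True)        # rows sum to 1

    # Each target position takes a convex combination of source labels
    out = labels_src.reshape(K, -1) @ aff.T      # K x (HW_tgt)
    return out.reshape(K, H, W)
```

Because each affinity row sums to one, propagating a valid categorical label map yields another valid categorical map, which is what lets the same routine serve segments, keypoints, or landmarks.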

One can implement landmark propagation in the same way as keypoint propagation, with K landmark classes and one background class. One can implement texture propagation by assigning a color to each label and rendering the propagated label maps afterwards. To do this with test.py, you essentially need to prepare a data loader that yields a stack of frames and the label map for the first frame.
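As a sketch of the landmark case described above, the helpers below build the (K+1) x H x W label map from landmark coordinates (class 0 as background) and read landmark positions back out of a propagated map via per-channel argmax. The function names and coordinate convention (row, col) are my own; adapt them to your data loader.

```python
import numpy as np

def landmarks_to_label_map(landmarks, H, W):
    """Build a (K+1) x H x W one-hot label map from K landmark (row, col)
    coordinates; channel 0 is the background class."""
    K = len(landmarks)
    lbl = np.zeros((K + 1, H, W), dtype=np.float32)
    lbl[0] = 1.0                       # every pixel starts as background
    for k, (r, c) in enumerate(landmarks, start=1):
        lbl[0, r, c] = 0.0             # un-assign background at the landmark
        lbl[k, r, c] = 1.0             # one-hot landmark class
    return lbl

def label_map_to_landmarks(lbl):
    """Recover landmark coordinates from a (propagated, possibly soft) map
    by taking the argmax location of each landmark channel."""
    coords = []
    for k in range(1, lbl.shape[0]):
        coords.append(np.unravel_index(np.argmax(lbl[k]), lbl[k].shape))
    return coords
```

After propagation the channels are soft distributions rather than one-hot, so the argmax (or a soft-argmax, if you want sub-pixel landmarks) gives the predicted location in each later frame.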

Finally, you can view our training task as spatial instance propagation -- the initial label map has H*W classes, one for each index in the map.
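To illustrate the "H*W classes" view: the initial label map for the training task is just the identity, reshaped so that channel i is one-hot at spatial index i. The toy resolution here is hypothetical; the actual size depends on the encoder's feature map.

```python
import numpy as np

H, W = 4, 4  # toy feature-map resolution (assumption)

# One class per spatial position: channel i has a single 1 at flat index i,
# i.e. the H*W x H*W identity matrix reshaped to (H*W, H, W).
identity_labels = np.eye(H * W, dtype=np.float32).reshape(H * W, H, W)
```

Propagating this map forward and checking where each "instance" lands is exactly tracking every position at once.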

To do detection, one needs to train an output head on top of the learned representation. I have not explored this much.