naver / r2d2


Codes for making datasets

ChunhuanLin opened this issue · comments

Hi, thanks for your great work. The datasets you made for training are crucial to your results. Will you release the code you used to build the datasets, along with more details about them? Thank you.

Hi @ChunhuanLin

So far, we used 3 types of datasets for training:

  • image pairs based on single images + random transformations:
    The code is already available (see the README: auto_pairs(web_images) and auto_pairs(aachen_db_images)).
  • image pairs based on single images + style transfer:
    We are not planning to release this code, as it depends on third-party libraries, but it is quite straightforward to implement (check the paper for more details).
  • image pairs based on optical flow:
    We are not planning to release the code to create the datasets based on optical flow. The pipeline, as described in the paper, is rather simple:
    1. collect pairs of matching images (typically, as a by-product of SfM)
    2. match the pairs using DeepMatching
    3. optionally, discard matches that do not satisfy the epipolar constraints given by the SfM (the fundamental matrix is recomputed from the SfM 2D point matches using OpenCV's findFundamentalMat() for better accuracy)
    4. compute the mask as the region where the density of DeepMatching's matches is low
    5. compute the optical flow using EpicFlow
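Step 4 above can be sketched roughly as follows. This is a hypothetical illustration, not the actual R2D2 code: it simply bins the match coordinates into a coarse grid and flags cells that contain too few matches as unreliable. The function name `low_density_mask` and the `cell`/`min_count` parameters are assumptions for the sketch.

```python
import numpy as np

def low_density_mask(matches, img_shape, cell=32, min_count=1):
    """Flag regions of an image where match density is low.

    matches: (N, 2) array of (x, y) match positions in the image.
    img_shape: (height, width) of the image.
    cell: side length in pixels of each grid cell.
    min_count: minimum number of matches for a cell to count as dense.
    Returns a boolean mask of shape img_shape, True where density is low.
    """
    h, w = img_shape
    gh, gw = (h + cell - 1) // cell, (w + cell - 1) // cell
    counts = np.zeros((gh, gw), dtype=int)
    for x, y in matches:
        counts[int(y) // cell, int(x) // cell] += 1
    # A cell is "low density" if it holds fewer than min_count matches
    low = counts < min_count
    # Upsample the per-cell decision back to pixel resolution
    return np.repeat(np.repeat(low, cell, axis=0), cell, axis=1)[:h, :w]
```

In practice one would likely smooth or dilate this mask (e.g. with a Gaussian or morphological filter) rather than use hard grid boundaries, but the grid-counting idea captures the gist of "density of matches is low".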

Hope that it helps!

@jerome-revaud
Thanks for your amazing work.
Could you please explain the step “compute the mask as region where density of deepmatching's matches is low”? Which method did you use to obtain the mask? @yocabon @humenbergerm @ChunhuanLin