Parskatt / DKM

[CVPR 2023] DKM: Dense Kernelized Feature Matching for Geometry Estimation

Home Page: https://parskatt.github.io/DKM/

With Synthetic Dataset Training Codes

TWJianNuo opened this issue · comments

Hi!

While trying to reproduce the results using MegaDepth + Synthetic Dataset with "train_mega_synthetic.py", I noticed that in the training code the model is set to DKM (version 1) in lines 31-33. Does this indicate that the training script "train_mega_synthetic.py" is designed for DKM (v1), or is the script suitable for both versions?

Thanks!

Hi again. I no longer recommend training with synthetic data; we mainly did this for comparison with PDCNet.
I will push an indoor version trained on MegaDepth + ScanNet, which we found to work best.

Also, apologies for things not being consistent; the code is still in active development and the public version may change over time.

@TWJianNuo The training is now updated for DKMv2, including ScanNet training. For ScanNet you need to download the ScanNet dataset and indices (see https://github.com/zju3dv/LoFTR/blob/master/docs/TRAINING.md). Note that we only used every 10th frame, since the full ScanNet dataset is extremely large.
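For anyone following along, here is a minimal sketch of the every-10th-frame subsampling described above, applied to a LoFTR-style ScanNet index file. The file name, per-scene key layout, and output path are assumptions for illustration, not the actual format shipped with LoFTR or DKM.

```python
import numpy as np

# Minimal sketch (assumed layout): keep only every 10th frame per scene from a
# ScanNet index file. "scannet_indices.npz", its per-scene keys, and the output
# file name are hypothetical; adapt to the actual LoFTR/DKM index format.
STRIDE = 10

data = np.load("scannet_indices.npz", allow_pickle=True)
subsampled = {}
for scene_name in data.files:
    frame_ids = np.asarray(data[scene_name])
    # Keep frames 0, 10, 20, ... of each scene, shrinking the data by roughly 10x.
    subsampled[scene_name] = frame_ids[frame_ids % STRIDE == 0]

np.savez("scannet_indices_every10.npz", **subsampled)
```

Equivalently, the stride filter could be applied inside the dataset class when pairs are enumerated, which avoids rewriting the index files.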

Ahh... I am confused. May I double-check the following facts:

For DKM:

  • DKMv2 outdoor: Trained using MegaDepth only.
  • DKMv2 indoor: Trained using MegaDepth + ScanNet.

While in the LoFTR paper:

  • LoFTR outdoor: Trained using MegaDepth only.
  • LoFTR indoor: Trained using ScanNet only.

In the current arXiv paper:
Is Tab. 1 showing results from DKM v1 trained on MegaDepth, or on MegaDepth + Synthetic?
Is Tab. 4 also showing results from DKM v1, but trained only on the MegaDepth dataset?

Thanks!

@TWJianNuo Yes, the preprint contains our previous approach with synthetic data and is not up to date. We are continuously improving the approach and will soon (in around a month) release an updated preprint. I'm sorry if this complicates any reproduction on your part.

Not at all! I really appreciate your prompt reply and well-maintained codebase!

Thanks!