oravus / seq2single

Visual place recognition from opposing viewpoints under extreme appearance variations

Look No Deeper: Recognizing Places from Opposing Viewpoints under Varying Scene Appearance using Single-View Depth Estimation

This is the source code for the paper "Look No Deeper: Recognizing Places from Opposing Viewpoints under Varying Scene Appearance using Single-View Depth Estimation" [arXiv][IEEE Xplore].

If you find this work useful, please cite it as: Garg, S., Babu V, M., Dharmasiri, T., Hausler, S., Suenderhauf, N., Kumar, S., Drummond, T., & Milford, M. (2019). Look no deeper: Recognizing places from opposing viewpoints under varying scene appearance using single-view depth estimation. In IEEE International Conference on Robotics and Automation (ICRA), 2019. IEEE.

BibTeX:

@inproceedings{garg2019look,
title={Look No Deeper: Recognizing Places from Opposing Viewpoints under Varying Scene Appearance using Single-View Depth Estimation},
author={Garg, Sourav and Babu V, Madhu and Dharmasiri, Thanuja and Hausler, Stephen and Suenderhauf, Niko and Kumar, Swagat and Drummond, Tom and Milford, Michael},
booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
year={2019}
}

Illustration of the proposed approach

[Figure: topometric representation of places, as used by the proposed approach]

Requirements

  • Ubuntu (Tested on 16.04)
  • Jupyter (Tested on 4.4.0)
  • Python (Tested on 3.5.6)
    • numpy (Tested on 1.15.2)
    • scipy (Tested on 1.1.0)

Optionally, for vis_results.ipynb:

  • matplotlib (Tested on 2.0.2)
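To sanity-check your setup against these tested versions, a quick check like the one below can be run. This is a convenience sketch, not part of the repository; newer package releases will likely work as well:

```python
# Print installed versions for comparison with the "Tested on" versions above.
# This script only reports versions; it does not enforce them.
import sys
import numpy
import scipy

print("Python:", sys.version.split()[0])  # tested on 3.5.6
print("numpy: ", numpy.__version__)       # tested on 1.15.2
print("scipy: ", scipy.__version__)       # tested on 1.1.0
```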

Download an example dataset and its pre-computed representations

  1. Download the pre-computed representations (~10 GB) into seq2single/precomputed/. Refer to seq2single/precomputed/readme.md for instructions on how to compute these representations.

  2. [Optional] Download the images (~1 GB) into seq2single/images/. These images are a subset of two different traverses from the Oxford RobotCar dataset.

(Note: the download links are hosted on Mega.nz and require a free account.)

Run

  1. The Jupyter notebook seq2single.ipynb first loads the pre-computed global image descriptors to find the top matches. These matches are then re-ranked by the proposed method using the pre-computed depth masks and dense conv5 features, along the lines of the sketch below.
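Conceptually, the two stages can be pictured as in the following numpy sketch. This is an illustrative outline only, not the notebook's actual code: the function name, array names, shapes, and the exact way the depth masks weight the conv5 similarities are assumptions made for the example.

```python
import numpy as np

def retrieve_and_rerank(q_glob, db_glob, q_conv5, db_conv5, q_mask, db_mask, k=10):
    """Two-stage retrieval sketch (hypothetical interface).

    Assumed shapes: q_glob (D,), db_glob (N, D) -- L2-normalized global
    descriptors; q_conv5 (P, C), db_conv5 (N, P, C) -- L2-normalized dense
    conv5 features at P = H*W spatial locations; q_mask (P,), db_mask (N, P)
    -- depth masks in [0, 1] emphasizing distant scene regions."""
    # Stage 1: shortlist the top-k database images by global-descriptor similarity.
    sims = db_glob @ q_glob
    shortlist = np.argsort(-sims)[:k]

    # Stage 2: re-rank the shortlist with dense conv5 matching, weighting each
    # pair of spatial locations by the depth masks so that far-away (more
    # viewpoint-invariant) regions dominate the score.
    scores = []
    for j in shortlist:
        pairwise = q_conv5 @ db_conv5[j].T      # (P, P) cosine similarities
        weights = np.outer(q_mask, db_mask[j])  # joint depth weighting
        scores.append((pairwise * weights).max(axis=1).mean())
    return shortlist[np.argsort(-np.asarray(scores))]
```

With the pre-computed arrays loaded (e.g., via numpy.load), the function returns database indices ordered by the re-ranked score.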

License

The code is released under the MIT License.

Related Projects

Delta Descriptors (2020)

CoarseHash (2020)

LoST (2018)
