vbalnt / UBC-Phototour-Patches-Torch

Code to convert the phototour patches dataset from Brown et al. learning descriptors paper to torch-lua


Phototour patches dataset in Torch .t7 format for experiments with convolutional feature descriptors

Intro

A simple script to convert the Phototour patches dataset into Torch format for experiments with convolutional neural networks.

Also check out pn-net.

The patches are saved in two versions, 32x32 and 64x64. The patch ids and the 500k and 100k patch-pair lists are also saved. The 500k pairs are usually reserved for training, and the results in the literature, reported as FPR95 error rates, are computed on the 100k pairs.
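
For reference, FPR95 is the false positive rate at the distance threshold where 95% of the matching pairs are accepted. The sketch below shows how it could be computed in Torch; the dists and labels tensors are assumed inputs (descriptor distances and match labels for the 100k pairs), not something this script produces.

require 'torch'

-- FPR95 sketch under assumed inputs: `dists` is a 1-D tensor of
-- descriptor distances, `labels` holds 1 for matching pairs and 0
-- for non-matching ones. Neither is produced by this repository.
local function fpr95(dists, labels)
  -- distances of the matching (positive) pairs, sorted ascending
  local pos = dists[labels:eq(1)]:sort()
  -- distance threshold that accepts 95% of the positives
  local thresh = pos[math.ceil(0.95 * pos:nElement())]
  -- fraction of non-matching pairs wrongly accepted at that threshold
  local neg = dists[labels:eq(0)]
  return neg:le(thresh):sum() / neg:nElement()
end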

How to run

Download the raw data from patches-dataset and unzip it. Change the parameters in the opt table to match your configuration (e.g. path and image filetype), and run the script with th PhototourPatches.lua
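
As an illustration, the opt table could look like the sketch below; the field names here are hypothetical and should be checked against the ones actually used in PhototourPatches.lua.

-- Hypothetical configuration; match the field names against the
-- actual opt table in PhototourPatches.lua before running.
local opt = {
  path = '/data/phototour/liberty',  -- folder with the unzipped raw patches
  ext  = 'bmp',                      -- image filetype of the raw patch images
}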

Download generated .t7 files

If you don't want to convert the data yourself, you can download the already converted files:

wget http://icvl.ee.ic.ac.uk/vbalnt/notredame-t7.tar.gz
wget http://icvl.ee.ic.ac.uk/vbalnt/liberty-t7.tar.gz
wget http://icvl.ee.ic.ac.uk/vbalnt/yosemite-t7.tar.gz

Note that the files above contain the DoG versions of the patches; to get the Harris versions, you need to rerun the conversion code.
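
After downloading and untarring, a .t7 file can be deserialized with torch.load. A minimal sketch follows; the filename is an example, and the layout of the loaded table is not documented here, so inspect it by printing.

require 'torch'

-- Deserialize a converted dataset and inspect its layout; the
-- filename is an example, and printing the result is the simplest
-- way to see the actual table structure.
local data = torch.load('notredame.t7')
print(data)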
