verlab / SceneUnderstanding_CIARP_2017


Project

This code is based on the paper A Robust Indoor Scene Recognition Method based on Sparse Representation, presented at the 22nd Iberoamerican Congress on Pattern Recognition (CIARP 2017). The goal of this software is to build a robust representation of scene images, conveying both global and local information, for the task of scene recognition. We build an over-complete dictionary whose basis vectors are feature vectors extracted from fragments of a scene, and the final representation of an image is a linear combination of the visual features of object fragments.

For more information, please access the project page.
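
As a rough illustration of the sparse-representation idea (and not the code in this repository), the sketch below stacks fragment feature vectors into a toy over-complete dictionary and encodes a new image feature as a sparse linear combination of its atoms with OMP. All array sizes, and the use of scikit-learn's sparse_encode, are assumptions made for this example only.

import numpy as np
from sklearn.decomposition import sparse_encode
from sklearn.preprocessing import normalize

# Toy dictionary: 512 fragment descriptors of dimension 128 stacked as atoms
# (in the paper these would be CNN features extracted from scene fragments).
rng = np.random.RandomState(0)
dictionary = normalize(rng.randn(512, 128))   # shape (n_atoms, n_features)

# Feature vector of a new scene image to be represented.
x = rng.randn(1, 128)

# Sparse code with OMP: activate at most 10% of the atoms
# (cf. the --lambda parameter described in the Usage section).
codes = sparse_encode(x, dictionary, algorithm='omp',
                      n_nonzero_coefs=int(0.1 * dictionary.shape[0]))

# The image is approximated as a linear combination of fragment features;
# the sparse code itself can serve as the final image representation.
reconstruction = codes @ dictionary
print(codes.shape, int(np.count_nonzero(codes)))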

Contact

Authors

Institution

Federal University of Minas Gerais (UFMG)
Computer Science Department
Belo Horizonte - Minas Gerais - Brazil

Laboratory

VeRLab: Laboratory of Computer Vision and Robotics
http://www.verlab.dcc.ufmg.br

Program

Dependencies

Usage

  1. Generate Train/Test split:
    An example is provided in the folder CFG_FILES (fold4.cfg). The source code also contains the script kfold.py if you wish to generate new splits; a hedged sketch of this step is shown after the parameter list below.

  2. Edit Config files:
    Examples are provided in the folder CFG_FILES. Config files should be stored in the same path as the one provided in the --folder parameter (described below). The code requires two files:

  • IMNET.cfg: referring to VGG16 trained on ImageNet
  • PLACES.cfg: referring to VGG16 trained on Places205
  3. Execution:
    Execute run_test.py using the following parameters:
  • -f, --folder: Path to the folder where the outputs of the code will be saved;
  • -o, --output: Path to the file where the output statistics (e.g. accuracy) will be saved;
  • -k, --fold: Index of Train/Test split (referring to the parameter [folds] in the Config files);
  • -m, --mode: Operation mode ('train' or 'test');
  • -d, --ns1: Size of dictionary for scale 1;
  • -e, --ns2: Size of dictionary for scale 2;
  • -l, --lambda: Sparsity (e.g. 0.1 to activate at most 10% of the dictionary);
  • -t, --method: Minimization Method ('OMP', 'SOMP' or 'LASSO');
  • -j, --dl: Sparsity controller for dictionary learning.
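
As referenced in step 1, the following is a minimal sketch of how new Train/Test splits could be generated. It does not reproduce the actual interface of kfold.py; the function name, arguments, and output are illustrative assumptions only.

import random

def make_kfold_splits(image_paths, k=5, seed=0):
    """Shuffle the image list and yield one (train, test) split per fold.
    Purely illustrative; the repository's kfold.py may work differently."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    folds = [paths[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [p for j, fold in enumerate(folds) if j != i for p in fold]
        yield train, test

if __name__ == '__main__':
    # Hypothetical image list; real splits would reference the dataset paths.
    images = ['img_%03d.jpg' % n for n in range(20)]
    for fold_idx, (train, test) in enumerate(make_kfold_splits(images, k=4)):
        print('fold', fold_idx, len(train), 'train /', len(test), 'test')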

Example of Usage:

python run_test.py -f /root/output -o /root/output/result_ -k 4 -m train -d 603 -e 3283 -l 0.1 -t OMP -j 0.03 
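
Assuming the same split, dictionary sizes, and sparsity settings are reused for evaluation, a corresponding test run would presumably only change the mode flag:

python run_test.py -f /root/output -o /root/output/result_ -k 4 -m test -d 603 -e 3283 -l 0.1 -t OMP -j 0.03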

Citation

If you use this code for academic purposes, please cite:

G. Nascimento, C. Laranjeira, V. Braz, A. Lacerda, E. R. Nascimento, A Robust Indoor Scene Recognition Method based on Sparse Representation, in: 22nd Iberoamerican Congress on Pattern Recognition, CIARP, Springer International Publishing, Valparaiso, CL, 2017. To appear.

Bibtex entry

@inproceedings{Nascimento2017,
  title     = {A Robust Indoor Scene Recognition Method based on Sparse Representation},
  author    = {Nascimento, Guilherme and Laranjeira, Camila and Braz, Vinicius and Lacerda, Anisio and Nascimento, Erickson Rangel},
  booktitle = {22nd Iberoamerican Congress on Pattern Recognition (CIARP)},
  publisher = {Springer International Publishing},
  year      = {2017},
  address   = {Valparaiso, CL},
  note      = {To appear},
}

Enjoy it.

About

License: GNU General Public License v3.0

