
Foveated-YOLT

You Only Look Twice - Foveated version

The pre-trained models used to test our method are CaffeNet, AlexNet, GoogLeNet and VGGNet (16 weight layers).

Download the files and, from the root directory, create a build directory (mkdir build). Then execute from the root

bash scripts/setup.sh

to download the pre-trained models directly.
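For reference, the core of such a setup step usually boils down to fetching the weight files. A minimal sketch, assuming a models/ target directory (the URLs are the public Caffe model zoo ones; the actual scripts/setup.sh may differ):

# Illustrative only -- see scripts/setup.sh for the authoritative sources and paths.
mkdir -p models
# BVLC reference CaffeNet and AlexNet weights
wget -nc -P models http://dl.caffe.berkeleyvision.org/bvlc_reference_caffenet.caffemodel
wget -nc -P models http://dl.caffe.berkeleyvision.org/bvlc_alexnet.caffemodel
# BVLC GoogLeNet weights
wget -nc -P models http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel
# VGGNet (16 weight layers) from the VGG group
wget -nc -P models http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_16_layers.caffemodel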

To compile from root:

cd build
cmake ..
make

To run yolt.cpp from the root:

First, run the setup.sh script:

bash scripts/setup.sh

Second, run the detector from the root:

bash scripts/run_detector.sh

To configure the network and its parameters, edit the run_detector.sh file accordingly.
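As a rough illustration, the configurable part of run_detector.sh could look like the sketch below. All variable names, paths, and the argument order are hypothetical assumptions; check the actual script for the real options:

# Hypothetical configuration block -- names and argument order are assumptions,
# not the script's actual contents.
MODEL=models/deploy.prototxt              # network definition (assumed path)
WEIGHTS=models/bvlc_googlenet.caffemodel  # pre-trained weights (assumed path)
THRESHOLD=0.75                            # detection confidence threshold (assumed)
LEVELS=5                                  # number of foveation levels (assumed)
./build/yolt "$MODEL" "$WEIGHTS" "$THRESHOLD" "$LEVELS"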

If you use our code, please cite our work:

@inproceedings{almeida2017deep,
  title={Deep Networks for Human Visual Attention: A hybrid model using foveal vision},
  author={Almeida, Ana Filipa and Figueiredo, Rui and Bernardino, Alexandre and Santos-Victor, Jos{\'e}},
  booktitle={Iberian Robotics conference},
  pages={117--128},
  year={2017},
  organization={Springer}
}

To test the Python wrapper on an example image, change filename='filename.jpg' in the test script to point to your image, and run the following command from the root directory:

python src/python_bindings/test.py
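For example, a one-off run against your own picture might look like this (a sketch assuming filename is set inside src/python_bindings/test.py and that the default filename='filename.jpg' line is present):

# Point the test script at your own image, then run it.
sed -i "s/filename='filename.jpg'/filename='my_image.jpg'/" src/python_bindings/test.py
python src/python_bindings/test.py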
