spalaciob / s2snets-reconstruction

Demo code for reconstructed images from fine-tuned autoencoders

CVPR 2018: What do Deep Networks Like to See?

Implementation from the CVPR 2018 paper "What do Deep Networks Like to See?".

This is a simple proof of concept that uses an autoencoder fine-tuned on ResNet50 to reconstruct input images.

  1. Download the Torch model from here and store it in the root directory of the repo.
  2. Call

python plot_ae_reconstruction.py -i PATH

where PATH is the path to the input image.

  3. An output image with the reconstruction will be saved in the root directory of the repo.
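Conceptually, the script loads the fine-tuned autoencoder, encodes the input image to a bottleneck representation, and decodes it back into an image of the same size. A minimal NumPy sketch of that encode/decode round trip (random weights and hypothetical shapes for illustration, not the actual ResNet50 autoencoder):

```python
import numpy as np

def reconstruct(image, enc_w, dec_w):
    """Encode an image to a bottleneck vector and decode it back."""
    flat = image.reshape(-1)       # flatten H x W into a vector
    code = enc_w @ flat            # encode to the bottleneck
    recon = dec_w @ code           # decode back to pixel space
    return recon.reshape(image.shape)

rng = np.random.default_rng(0)
image = rng.random((8, 8))            # stand-in for the input image
enc_w = rng.random((16, 64)) * 0.1    # hypothetical encoder weights
dec_w = rng.random((64, 16)) * 0.1    # hypothetical decoder weights

recon = reconstruct(image, enc_w, dec_w)
print(recon.shape)  # (8, 8) -- same shape as the input
```

The real model replaces the two random matrices with a deep convolutional encoder/decoder whose weights were fine-tuned against ResNet50 features.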

Example:

Original input and its reconstruction using an autoencoder fine-tuned on ResNet50.

For more info, please check out the paper's website.

UPDATES:

02.11.2020

  • Weights for the original SegNet (pre-trained on YFCC100m) are now available here and can be used by plot_ae_reconstruction.py. Make sure the correct path is set by modifying the global variable RESNET_PATH.
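For example, near the top of plot_ae_reconstruction.py (the weight filename below is a placeholder; use whatever name the downloaded file has):

```python
# Point the script at the downloaded SegNet weights.
# The filename is a placeholder for the actual downloaded file.
RESNET_PATH = "./segnet_yfcc100m.t7"
```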


License: MIT License

