nwatters01 / stacked-autoencoder

Stacked autoencoder model from DiCarlo lab rotation.

Stacked autoencoder codebase.

This codebase contains a stacked autoencoder model. The main entrypoint is run.py; see the documentation at the top of run.py for instructions on how to train a model.

Configs specifying models are in the configs directory.
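For readers unfamiliar with the model class, a stacked autoencoder is typically built by greedy layer-wise training: each layer is trained to reconstruct the activations of the layer below it, and its encoded features become the input to the next layer. The sketch below illustrates the idea in plain NumPy; the class and function names (`TiedAutoencoder`, `train_stacked`) are illustrative only and do not correspond to the actual code in this repository.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TiedAutoencoder:
    """One autoencoder layer with tied encoder/decoder weights."""

    def __init__(self, n_in, n_hidden, rng):
        self.W = rng.normal(0.0, 0.1, size=(n_in, n_hidden))
        self.b = np.zeros(n_hidden)  # encoder bias
        self.c = np.zeros(n_in)      # decoder bias

    def encode(self, x):
        return sigmoid(x @ self.W + self.b)

    def reconstruct(self, x):
        return sigmoid(self.encode(x) @ self.W.T + self.c)

    def train_step(self, x, lr=0.5):
        """One gradient step on mean squared reconstruction error."""
        h = self.encode(x)
        r = sigmoid(h @ self.W.T + self.c)
        n = x.shape[0]
        delta_r = (r - x) * r * (1.0 - r)             # error at decoder output
        delta_h = (delta_r @ self.W) * h * (1.0 - h)  # backprop to hidden layer
        self.W -= lr * (x.T @ delta_h + delta_r.T @ h) / n
        self.b -= lr * delta_h.mean(axis=0)
        self.c -= lr * delta_r.mean(axis=0)
        return np.mean((r - x) ** 2)

def train_stacked(x, hidden_sizes, steps=200, rng=None):
    """Greedy layer-wise training: each layer learns to reconstruct
    the encoded activations of the layer below it."""
    rng = rng if rng is not None else np.random.default_rng(0)
    layers, inputs = [], x
    for n_hidden in hidden_sizes:
        layer = TiedAutoencoder(inputs.shape[1], n_hidden, rng)
        for _ in range(steps):
            layer.train_step(inputs)
        layers.append(layer)
        inputs = layer.encode(inputs)  # feed encoded features upward
    return layers
```

To obtain the deepest-layer features for new data, chain `encode` through the trained layers in order. The actual configs in this repository presumably specify the per-layer sizes and training hyperparameters in a similar spirit.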

If you have questions about this implementation, please contact Nick Watters (nwatters@mit.edu).

About

License: MIT License


Languages

Python 100.0%