sheridp / Towards-a-Biologically-Plausible-Backprop


Equilibrium Propagation

Original paper: Scellier & Bengio (2017), "Equilibrium Propagation: Bridging the Gap Between Energy-Based Models and Backpropagation", Frontiers in Computational Neuroscience.

Equilibrium Propagation (EP) is an algorithm for computing error gradients that bridges the gap between the Backpropagation (BP) algorithm and the Contrastive Hebbian Learning (CHL) algorithm used to train energy-based models (such as Boltzmann machines and Hopfield networks). EP is similar to CHL in that the learning rule for adjusting the weights is local and Hebbian. EP is also similar to BP in that it involves the propagation of an error signal backwards through the layers of the network. These features make EP of interest not only as a model for neuroscience, but also for the development of highly energy-efficient, learning-capable hardware (neuromorphic hardware).
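To make the two-phase procedure concrete, below is a minimal NumPy sketch of a single EP update on a toy network with one hidden layer. This is an illustration only, not the repo's Theano code: the hard-sigmoid activation follows the EP papers, but the layer sizes, relaxation schedule, and function names are assumptions, and bias terms are omitted for brevity.

import numpy as np

rng = np.random.default_rng(0)

def rho(s):
    # Hard-sigmoid activation, as used in the EP papers
    return np.clip(s, 0.0, 1.0)

def drho(s):
    # Derivative of the hard sigmoid: 1 on [0, 1], 0 outside
    return ((s >= 0.0) & (s <= 1.0)).astype(float)

# Toy sizes and weights; all values here are illustrative
nx, nh, ny = 4, 8, 2
W1 = rng.normal(scale=0.1, size=(nx, nh))  # input-to-hidden weights
W2 = rng.normal(scale=0.1, size=(nh, ny))  # hidden-to-output weights

def relax(x, y, beta, h, o, steps=100, dt=0.1):
    # Discretized state dynamics, run toward a fixed point.
    # beta = 0 is the free phase; beta > 0 weakly nudges the output toward y.
    for _ in range(steps):
        dh = -h + drho(h) * (rho(x) @ W1 + rho(o) @ W2.T)
        do = -o + drho(o) * (rho(h) @ W2)
        if beta > 0.0:
            do = do + beta * (y - o)  # weak clamping of the output units
        h = h + dt * dh
        o = o + dt * do
    return h, o

def ep_update(x, y, beta=0.5, lr=0.05):
    global W1, W2
    # Phase 1: free relaxation to a first fixed point
    h_free, o_free = relax(x, y, beta=0.0, h=np.zeros(nh), o=np.zeros(ny))
    # Phase 2: nudged relaxation, starting from the free fixed point
    h_ndg, o_ndg = relax(x, y, beta=beta, h=h_free, o=o_free)
    # Local, contrastive (Hebbian) update: difference of co-activations
    # between the two fixed points, scaled by 1/beta
    W1 += lr * (np.outer(rho(x), rho(h_ndg)) - np.outer(rho(x), rho(h_free))) / beta
    W2 += lr * (np.outer(rho(h_ndg), rho(o_ndg)) - np.outer(rho(h_free), rho(o_free))) / beta
    return o_free  # free-phase prediction

# Example call on one (hypothetical) input/target pair
x = rng.uniform(size=nx)
y = np.array([1.0, 0.0])
prediction = ep_update(x, y)

The key point is locality: each weight update depends only on the activities of the two units that weight connects, measured at the free and nudged fixed points. For small beta, this contrastive update approximates the gradient that backprop would compute.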

Our recent NeurIPS 2019 paper: Ernoult, Grollier, Querlioz, Bengio & Scellier, "Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input".

This paper introduces a discrete-time formulation of EP with simplified notations, closer to those used in the deep learning literature. It also establishes an equivalence between EP and BPTT (backpropagation through time) in an RNN with a static input, and it introduces a convolutional RNN model trainable with EP.
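Schematically, and up to sign and notation conventions that may differ from the paper, the discrete-time formulation replaces the continuous relaxation by an update map derived from a scalar function \Phi:

% Free phase: iterate the map to a fixed point s_*
s_{t+1} = \frac{\partial \Phi}{\partial s}(x, s_t)

% Nudged phase: start from s_* and weakly pull the state toward lower loss
s^{\beta}_{t+1} = \frac{\partial \Phi}{\partial s}\!\left(x, s^{\beta}_t\right) - \beta\,\frac{\partial \ell}{\partial s}\!\left(s^{\beta}_t, y\right)

% EP parameter-gradient estimate from the two fixed points
\widehat{\nabla}_{\theta} = \frac{1}{\beta}\left( \frac{\partial \Phi}{\partial \theta}\!\left(x, s^{\beta}_{*}\right) - \frac{\partial \Phi}{\partial \theta}\left(x, s_{*}\right) \right)

Roughly, the equivalence result says that these contrastive updates match, step by step, the gradients that BPTT computes on the same RNN unrolled from its fixed point.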

Links to other papers:

The code in this repo is written in Theano, the deep learning framework developed by MILA.

A more recent Keras implementation is also available.

Getting started

  • Download the code from GitHub:
git clone https://github.com/bscellier/Towards-a-Biologically-Plausible-Backprop
cd Towards-a-Biologically-Plausible-Backprop
  • To train a model (with one hidden layer by default), run the Python script:
THEANO_FLAGS="floatX=float32, gcc.cxxflags='-march=core2'" python train_model.py
  • Once the model is trained, launch the GUI by running the Python script:
THEANO_FLAGS="floatX=float32, gcc.cxxflags='-march=core2'" python gui.py net1
