Efficient Lifelong Learning with A-GEM

This is the official implementation of Averaged Gradient Episodic Memory (A-GEM) in TensorFlow.
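
For background, the A-GEM update works on two flattened gradients: g, computed on the current task batch, and g_ref, computed on a batch sampled from the episodic memory. When g · g_ref < 0, the task gradient is projected to g̃ = g − (g · g_ref / g_ref · g_ref) · g_ref, so that the memory loss is not increased to first order. The snippet below is a minimal NumPy sketch of this rule for illustration only; the function name agem_project is hypothetical, and this is not the repository's actual TensorFlow implementation.

import numpy as np

def agem_project(grad, ref_grad):
    """Return the A-GEM update direction (illustrative sketch).

    grad:     flattened gradient of the loss on the current task batch
    ref_grad: flattened gradient of the loss on the episodic-memory batch

    If the two gradients agree (non-negative dot product), the task
    gradient is used as-is; otherwise its component along ref_grad is
    removed, so the memory loss does not increase to first order.
    """
    dot = np.dot(grad, ref_grad)
    if dot >= 0.0:
        return grad
    return grad - (dot / np.dot(ref_grad, ref_grad)) * ref_grad

# Toy usage: a conflicting gradient gets projected.
g = np.array([1.0, -1.0])
g_ref = np.array([0.0, 1.0])
g_tilde = agem_project(g, g_ref)   # -> array([1., 0.]), orthogonal to g_ref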

Requirements

TensorFlow >= v1.9.0.
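
If TensorFlow is not already installed, a compatible 1.x release can be installed with pip. This is a suggested command, not an officially tested configuration; note that TensorFlow 1.x wheels require an older Python:

$ pip install "tensorflow>=1.9,<2.0"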

Training

To replicate the results of the paper on a particular dataset, execute (see the Note below for downloading the CUB and AWA datasets):

$ ./replicate_results.sh <DATASET> <THREAD-ID> <JE>

Example runs are:

$ ./replicate_results.sh MNIST 3      /* Train PNN and A-GEM on MNIST */
$ ./replicate_results.sh CUB 1 1      /* Train JE models of RWALK and A-GEM on CUB */

Note

For the CUB and AWA experiments, download the datasets before running the above script. Run the following to download them:

$ ./download_cub_awa.sh

The plotting code is provided in the plotting_code/ folder. Update the paths in the plotting scripts accordingly.

When using this code, please cite our papers:

@inproceedings{AGEM,
  title={Efficient Lifelong Learning with A-GEM},
  author={Chaudhry, Arslan and Ranzato, Marc'Aurelio and Rohrbach, Marcus and Elhoseiny, Mohamed},
  booktitle={ICLR},
  year={2019}
}

@inproceedings{chaudhry2018riemannian,
  title={Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence},
  author={Chaudhry, Arslan and Dokania, Puneet K and Ajanthan, Thalaiyasingam and Torr, Philip HS},
  booktitle={ECCV},
  year={2018}
}

Questions/Bugs

License

This source code is released under the MIT License, which can be found in the LICENSE file in the root directory of this source tree.
