prprbr / awesome-lifelong-continual-learning

A list of papers, blogs, datasets and software in the field of lifelong/continual machine learning

awesome-continual-learning / awesome-lifelong-learning

The objective of continual learning is for machines to replicate human-like learning: sequentially learning new tasks and observations while retaining the knowledge gained from past experiences.

The following is a list of papers, blogs, datasets and software in the field of lifelong / continual / sequential / incremental machine learning.

Contents

Papers

Theory & Surveys

  • An empirical investigation of catastrophic forgetting in gradient-based neural networks. (2013) [paper]

Discusses the problem of forgetting in neural networks and the advantage of using dropout

  • Catastrophic interference in connectionist networks: The sequential learning problem. (1989) [paper]

One of the earliest papers introducing the concept of catastrophic forgetting in connectionist networks

  • Continual Lifelong Learning with Neural Networks: A Review (2018) [paper]

An exhaustive survey paper on different approaches for continual or lifelong learning

  • Making memories last: the synaptic tagging and capture hypothesis. (2011) [paper]

A neuroscientific perspective on synaptic learning

  • A massively parallel architecture for a self-organizing neural pattern recognition machine (1989) [paper]

Discusses the tradeoff between stability (the ability to preserve past knowledge) and plasticity (the ability to rapidly learn new information)

  • Lifelong Machine Learning (2016)

A book on this topic [draft]

Approaches

  • Overcoming catastrophic forgetting in neural networks (2017) [paper]

Applies a penalty during learning that consolidates the weights that were important for older tasks (elastic weight consolidation, EWC), with per-weight importance estimated via the Fisher information matrix
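
A minimal sketch of the EWC penalty term, assuming a PyTorch model; `old_params` and `fisher_diag` are illustrative names for dicts snapshotted after training on the previous task:

```python
import torch

def ewc_penalty(model, old_params, fisher_diag, lam=1000.0):
    """Quadratic EWC-style penalty: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (fisher_diag[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# While training on the new task:
# loss = task_loss(output, target) + ewc_penalty(model, old_params, fisher_diag)
```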

  • Less-forgetting learning in deep neural networks (2016) [paper]

Regularization-based technique that discourages the final hidden layer's neural representation from changing much

  • Learning without Forgetting (2016) [paper]

Uses knowledge-distillation-based regularization: while learning from the new data only, it enforces that the network's predictions on the new data stay close to the predictions made with the old task's parameters
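
A minimal sketch of such a distillation term (a common formulation, not necessarily the paper's exact loss); `T` is the softmax temperature:

```python
import torch.nn.functional as F

def distillation_loss(new_logits, old_logits, T=2.0):
    # Soft cross-entropy between the old network's softened predictions
    # (targets) and the updated network's predictions on the same inputs.
    old_probs = F.softmax(old_logits / T, dim=1)
    new_log_probs = F.log_softmax(new_logits / T, dim=1)
    return -(old_probs * new_log_probs).sum(dim=1).mean() * (T * T)

# total_loss = new_task_cross_entropy + lambda_distill * distillation_loss(new_logits, old_logits)
```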

  • Gradient Episodic Memory for continual learning (2017) [paper] [Code]

Stores a small episodic memory of examples from past tasks and projects new gradients so that the loss on those memories does not increase. An efficient version (A-GEM) has been recently proposed in this 2019 [paper]
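
A minimal sketch of the A-GEM-style projection step, assuming flattened 1-D gradient vectors (`grad` on the current batch, `grad_ref` on the episodic memory):

```python
import torch

def agem_project(grad, grad_ref, eps=1e-12):
    # If the proposed update conflicts with the memory gradient (negative
    # dot product, i.e. it would increase loss on past examples), remove
    # the conflicting component before applying the update.
    dot = torch.dot(grad, grad_ref)
    if dot < 0:
        grad = grad - (dot / (torch.dot(grad_ref, grad_ref) + eps)) * grad_ref
    return grad
```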

  • iCaRL: Incremental Classifier and Representation Learning (2017) [paper] [Code]

Uses herding to select a representative exemplar subset in the process of sequentially learning new classes of data
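
A minimal NumPy sketch of herding-style exemplar selection (illustrative, not the paper's exact procedure): greedily pick examples whose running feature mean best approximates the class mean:

```python
import numpy as np

def herding_selection(features, m):
    # features: (n, d) array of feature vectors for one class; pick m exemplars.
    class_mean = features.mean(axis=0)
    selected, running_sum = [], np.zeros_like(class_mean)
    for k in range(1, m + 1):
        # Distance of each candidate running mean (running_sum + x) / k to the class mean.
        gaps = np.linalg.norm(class_mean - (running_sum + features) / k, axis=1)
        gaps[selected] = np.inf  # never pick the same example twice
        idx = int(gaps.argmin())
        selected.append(idx)
        running_sum += features[idx]
    return selected
```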

  • Subset Replay based Continual Learning for Scalable Improvement of Autonomous Systems (2018) [paper]

Uses the neural network's features to perform near-online submodular subset selection over previous examples and replays them during newer learning sessions

  • Continual learning with deep generative replay (2017) [paper]

Without storing any of the previous training examples, it trains task-specific GANs to generate and replay those older examples while learning new tasks
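
A minimal sketch of the replay step, assuming `old_generator` (the frozen GAN generator from the previous session) and `old_solver` (the frozen classifier); all names and the latent dimension are illustrative:

```python
import torch

def replay_batch(old_generator, old_solver, batch_size, latent_dim=100):
    # Sample pseudo-examples of earlier tasks from the frozen generator and
    # label them with the frozen solver; no real past data is stored.
    with torch.no_grad():
        x_replay = old_generator(torch.randn(batch_size, latent_dim))
        y_replay = old_solver(x_replay).argmax(dim=1)
    return x_replay, y_replay

# Each new-task batch is mixed with a replay batch before the parameter update.
```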

Similar to generative replay, but uses an autoencoder instead of a GAN to replay the previously learned data

  • Continual Learning Through Synaptic Intelligence (2017) [paper] [Code]

Similar to EWC, but instead of the Fisher information, the importance score of each weight is computed online along the learning trajectory
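
A minimal sketch of the online importance accumulation (illustrative names; called right after each optimizer step):

```python
import torch

def accumulate_importance(model, prev_params, omega):
    # Credit each weight with -grad * (its change this step): its estimated
    # contribution to the drop in loss along the training trajectory.
    # At a task boundary these running scores are normalized by the total
    # parameter change and used in an EWC-style quadratic penalty.
    for name, p in model.named_parameters():
        if p.grad is not None:
            omega[name] += -p.grad.detach() * (p.detach() - prev_params[name])
            prev_params[name] = p.detach().clone()
```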

  • Memory Efficient Experience Replay for Streaming Learning (2018) [paper]

Explores stream clustering approaches to store subsets to be rehearsed later while learning new data or tasks

  • Measuring Catastrophic Forgetting in Neural Networks (2017) [paper]

New metrics proposed to measure a model's ability to retain past knowledge and acquire new knowledge

  • Memory Replay GANs: learning to generate images from new categories without forgetting (2018) [paper] [code]

Another GAN-based method that conditionally generates and replays previously learned data in future learning sessions

  • Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference. (2019) [paper] [code]

Combines reservoir-sampling-based experience replay with optimization-based meta-learning
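
For reference, a minimal sketch of a reservoir-sampling replay buffer (the classic algorithm; names illustrative), which keeps a uniform random sample of the stream in fixed memory:

```python
import random

class ReservoirBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []   # stored (example, label) pairs
        self.seen = 0    # total stream items observed so far

    def add(self, item):
        # Each stream item ends up in the buffer with probability capacity/seen.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(item)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = item
```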

  • Overcoming Catastrophic Forgetting for Continual Learning via Model Adaptation (2019) [paper]

In addition to a generator for previous data, this paper also uses a dynamic weight/parameter generator

  • Progressive Neural Networks (2016) [paper]

A multi-column approach where a new column is added for each new task; each layer takes input from the previous layer of its own column as well as from the previous layers of earlier columns

  • Progress & Compress A scalable continual learning approach (2018) [paper]

Two phases work in alternation: the progress phase is similar to the above, while the compress phase distills the new knowledge into a knowledge base using EWC

  • Reinforced Continual Learning (2018) [paper]

Uses reinforcement learning to adaptively expand the neural network when a new task arrives

  • Do not Forget to Attend to Uncertainty while Mitigating Catastrophic Forgetting [paper] [code]

Uses prediction uncertainty information and attention to improve continual learning

Datasets

  • Core50
  • Incremental CIFAR
  • Permuted MNIST (see the construction sketch below)
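
Permuted MNIST tasks are typically built by applying one fixed random pixel permutation per task; a minimal sketch:

```python
import numpy as np

def make_permuted_task(images, seed):
    # Apply the same fixed random pixel permutation (determined by `seed`)
    # to every flattened 28x28 MNIST image, yielding one task.
    perm = np.random.default_rng(seed).permutation(28 * 28)
    return images.reshape(len(images), -1)[:, perm]

# tasks = [make_permuted_task(mnist_images, seed=t) for t in range(num_tasks)]
```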

Industry/Startups

Blogs

  • Why Continual Learning is the key towards Machine Intelligence [Medium]
  • Enabling Continual Learning in Neural Networks [DeepMind blog]
  • IBM’s Quest to Solve the Continual Learning Problem and Build Neural Networks Without Amnesia [link]
  • ContinualAI website [link]

Workshops
