
Mammoth - An Extendible (General) Continual Learning Framework for PyTorch

Official repository of Class-Incremental Continual Learning into the eXtended DER-verse and Dark Experience for General Continual Learning: a Strong, Simple Baseline

NEW: Join our Discord server for all your Mammoth-related questions!

[Example streams: Sequential MNIST · Sequential CIFAR-10 · Sequential TinyImagenet · Permuted MNIST · Rotated MNIST · MNIST-360]

Setup

  • Use ./utils/main.py to run experiments (see the example invocation below).
  • Pass --load_best_args to use the best hyperparameters from the papers.
  • New models can be added to the models/ folder.
  • New datasets can be added to the datasets/ folder.
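
For example, assuming the identifiers used by this codebase (derpp for DER++ and seq-cifar10 for Sequential CIFAR-10; run python utils/main.py --help for the exact model, dataset, and flag names in your checkout), an invocation could look like:

python utils/main.py --model derpp --dataset seq-cifar10 --buffer_size 500 --load_best_args

Here --buffer_size sets the number of examples kept for rehearsal, and --load_best_args fills in the remaining hyperparameters with the best values reported in the papers.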

Models

  • eXtended-DER (X-DER)
  • Dark Experience Replay (DER)
  • Dark Experience Replay++ (DER++) (see the training-step sketch after this list)
  • Learning a Unified Classifier Incrementally via Rebalancing (LUCIR)
  • Greedy Sampler and Dumb Learner (GDumb)
  • Bias Correction (BiC)
  • Regular Polytope Classifier (RPC)
  • Gradient Episodic Memory (GEM)
  • A-GEM
  • A-GEM with Reservoir (A-GEM-R)
  • Experience Replay (ER)
  • Meta-Experience Replay (MER)
  • Function Distance Regularization (FDR)
  • Greedy gradient-based Sample Selection (GSS)
  • Hindsight Anchor Learning (HAL)
  • Incremental Classifier and Representation Learning (iCaRL)
  • online Elastic Weight Consolidation (oEWC)
  • Synaptic Intelligence (SI)
  • Learning without Forgetting (LwF)
  • Progressive Neural Networks (PNN)
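
As a concrete reference for the replay-based entries above, here is a minimal sketch of the DER++ update: the cross-entropy on the current batch is augmented with an MSE term matching logits stored in a reservoir-sampled buffer (the DER term) and a cross-entropy term on stored ground-truth labels (the ++ addition). This is an illustrative re-implementation under simplified assumptions, not the code in models/derpp.py; the names ReservoirBuffer and derpp_step are ours.

import random

import torch
import torch.nn.functional as F


class ReservoirBuffer:
    """Fixed-size memory filled with reservoir sampling over the stream."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.examples, self.labels, self.logits = [], [], []
        self.num_seen = 0

    def add(self, xs, ys, zs):
        for x, y, z in zip(xs, ys, zs):
            if len(self.examples) < self.capacity:
                self.examples.append(x)
                self.labels.append(y)
                self.logits.append(z)
            else:
                # Keep each of the num_seen items with equal probability.
                j = random.randrange(self.num_seen + 1)
                if j < self.capacity:
                    self.examples[j], self.labels[j], self.logits[j] = x, y, z
            self.num_seen += 1

    def sample(self, batch_size):
        idx = random.sample(range(len(self.examples)),
                            min(batch_size, len(self.examples)))
        pick = lambda buf: torch.stack([buf[i] for i in idx])
        return pick(self.examples), pick(self.labels), pick(self.logits)


def derpp_step(model, optimizer, buffer, x, y, alpha=0.5, beta=0.5, minibatch=32):
    """One DER++ update: CE on the stream plus two replay terms."""
    optimizer.zero_grad()
    outputs = model(x)
    loss = F.cross_entropy(outputs, y)
    if buffer.num_seen > 0:
        # DER term: match the logits recorded when examples entered the buffer.
        buf_x, _, buf_z = buffer.sample(minibatch)
        loss = loss + alpha * F.mse_loss(model(buf_x), buf_z)
        # ++ term: plain experience replay on stored ground-truth labels.
        buf_x2, buf_y2, _ = buffer.sample(minibatch)
        loss = loss + beta * F.cross_entropy(model(buf_x2), buf_y2)
    loss.backward()
    optimizer.step()
    # Store the pre-update network responses as future replay targets.
    buffer.add(x.detach(), y.detach(), outputs.detach())
    return loss.item()

Setting beta to zero recovers plain DER.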

Datasets

  • Sequential MNIST (Class-IL / Task-IL)
  • Sequential CIFAR-10 (Class-IL / Task-IL)
  • Sequential Tiny ImageNet (Class-IL / Task-IL)
  • Sequential CIFAR-100 (Class-IL / Task-IL)
  • Permuted MNIST (Domain-IL; see the sketch after this list)
  • Rotated MNIST (Domain-IL)
  • MNIST-360 (General Continual Learning)
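
To make the Domain-IL protocol concrete: in Permuted MNIST every task presents the same ten digit classes, but the inputs of each task are scrambled by a fixed, task-specific pixel permutation, so only the input domain shifts while the label space stays put. A minimal, illustrative way to build such a stream with torchvision (not Mammoth's own loader in datasets/) is:

import torch
from torchvision import datasets, transforms


def permuted_mnist_tasks(n_tasks=20, root="./data", seed=0):
    """Return one MNIST dataset per task, each with a frozen pixel permutation."""
    gen = torch.Generator().manual_seed(seed)
    tasks = []
    for _ in range(n_tasks):
        perm = torch.randperm(28 * 28, generator=gen)  # fixed for this task
        tfm = transforms.Compose([
            transforms.ToTensor(),
            # Bind this task's permutation by value via the default argument.
            transforms.Lambda(lambda x, p=perm: x.view(-1)[p].view(1, 28, 28)),
        ])
        tasks.append(datasets.MNIST(root, train=True, download=True,
                                    transform=tfm))
    return tasks

Rotated MNIST follows the same recipe with a fixed per-task rotation in place of the permutation.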

Citing these works

@article{boschini2022class,
  title={Class-Incremental Continual Learning into the eXtended DER-verse},
  author={Boschini, Matteo and Bonicelli, Lorenzo and Buzzega, Pietro and Porrello, Angelo and Calderara, Simone},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2022},
  publisher={IEEE}
}

@inproceedings{buzzega2020dark,
  author = {Buzzega, Pietro and Boschini, Matteo and Porrello, Angelo and Abati, Davide and Calderara, Simone},
  booktitle = {Advances in Neural Information Processing Systems},
  editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin},
  pages = {15920--15930},
  publisher = {Curran Associates, Inc.},
  title = {Dark Experience for General Continual Learning: a Strong, Simple Baseline},
  volume = {33},
  year = {2020}
}

Awesome Papers using Mammoth

Our Papers

  • Dark Experience for General Continual Learning: a Strong, Simple Baseline (NeurIPS 2020) [paper]
  • Rethinking Experience Replay: a Bag of Tricks for Continual Learning (ICPR 2020) [paper] [code]
  • Class-Incremental Continual Learning into the eXtended DER-verse (TPAMI 2022) [paper]
  • Effects of Auxiliary Knowledge on Continual Learning (ICPR 2022) [paper]
  • Transfer without Forgetting (ECCV 2022) [paper] [code]
  • Continual semi-supervised learning through contrastive interpolation consistency (PRL 2022) [paper] [code]
  • On the Effectiveness of Lipschitz-Driven Rehearsal in Continual Learning (NeurIPS 2022) [paper] [code]

Other Awesome CL works using Mammoth

  • New Insights on Reducing Abrupt Representation Change in Online Continual Learning (ICLR 2022) [paper] [code]
  • Learning fast, learning slow: A general continual learning method based on complementary learning system (ICLR 2022) [paper] [code]
  • Self-supervised models are continual learners (CVPR 2022) [paper] [code]
  • Representational continuity for unsupervised continual learning (ICLR 2022) [paper] [code]
  • Continual Learning by Modeling Intra-Class Variation (TMLR 2023) [paper] [code]
  • Consistency is the key to further Mitigating Catastrophic Forgetting in Continual Learning (CoLLAs 2022) [paper] [code]
  • Continual Normalization: Rethinking Batch Normalization for Online Continual Learning (ICLR 2022) [paper] [code]
  • NISPA: Neuro-Inspired Stability-Plasticity Adaptation for Continual Learning in Sparse Networks (ICML 2022) [paper]
  • Learning from Students: Online Contrastive Distillation Network for General Continual Learning (IJCAI 2022) [paper] [code]

Update Roadmap

In the near future, we plan to incorporate the following improvements into this master repository:

  • ER+Tricks (Rethinking Experience Replay: a Bag of Tricks for Continual Learning)
  • TwF & Pretraining Baselines (Transfer without Forgetting)
  • CCIC & CSSL Baselines (Continual semi-supervised learning through contrastive interpolation consistency)
  • LiDER (On the Effectiveness of Lipschitz-Driven Rehearsal in Continual Learning)
  • Additional X-DER datasets (Class-Incremental Continual Learning into the eXtended DER-verse)

Pull requests are welcome! Get in touch.

Previous versions

If you're interested in a version of this repo that only includes the code for Dark Experience for General Continual Learning: a Strong, Simple Baseline, please use our neurips2020 tag.
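
For example, from a clone of this repository:

git checkout neurips2020

This leaves the working tree at the tagged commit (in a detached-HEAD state).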

License

MIT

