Incremental learning in image classification


Manuele Macchia, Francesco Montagna, Giacomo Zema

Machine Learning and Deep Learning
Politecnico di Torino
A.Y. 2019/2020

Abstract. Extending the knowledge of a model is an open problem in deep learning. A central issue in incremental learning is catastrophic forgetting, the degradation of previously acquired knowledge as new information is gradually learned. The aim of this implementation is to reproduce existing baselines that address the difficulties posed by incremental learning, to propose variations on these frameworks in order to gain deeper insight into their components, and finally to define new approaches that overcome existing limitations.


Usage

To run an experiment, open one of the notebooks in the root directory of the repository and execute all cells related to the desired experiment. For example, to reproduce the iCaRL baseline experiment, run the iCaRL section of baselines.ipynb. The following section illustrates the directory structure and presents the contents of the notebooks.

Structure

Notebooks

  • baselines.ipynb includes baseline experiments such as fine-tuning, Learning without Forgetting and iCaRL.

  • studies_loss.ipynb contains experiments aimed at observing the behaviour of the network when replacing classification and distillation losses with different combinations.

  • studies_classifier.ipynb implements and evaluates different classifiers in place of iCaRL's standard nearest-mean-of-exemplars (a sketch of this classifier follows below).

  • distillation_targets.ipynb analyses distillation targets and implements a variation of the iCaRL framework that yields a slight performance improvement.

  • representation_drift.ipynb and representation_drift_tsne.ipynb contain the second variation applied to the iCaRL framework, along with related visualizations.

More information regarding the baseline experiments, ablation studies and variations is available in the report.
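As a rough illustration of the classifier that studies_classifier.ipynb replaces, here is a minimal sketch of iCaRL's nearest-mean-of-exemplars rule: each class is represented by the mean of its exemplars' feature vectors, and a sample is assigned to the class whose mean is closest in feature space. The function names (class_means, nearest_mean_of_exemplars) are hypothetical and do not correspond to the repository's code.

```python
import numpy as np

def class_means(features_per_class):
    """Compute one L2-normalized mean feature vector per class.

    features_per_class: dict mapping class label -> array of shape
    (n_exemplars, d) holding that class's exemplar feature vectors.
    """
    means = {}
    for label, feats in features_per_class.items():
        mu = feats.mean(axis=0)
        means[label] = mu / np.linalg.norm(mu)  # normalize, as in iCaRL
    return means

def nearest_mean_of_exemplars(x, means):
    """Assign feature vector x to the class with the closest mean."""
    x = x / np.linalg.norm(x)
    return min(means, key=lambda label: np.linalg.norm(x - means[label]))
```

Because both the class means and the query feature are L2-normalized, the nearest-mean rule is equivalent to picking the class with the highest cosine similarity to the sample's features.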

Directories

  • data contains a class for handling CIFAR-100 in an incremental setting. Its main purpose is dividing the dataset into ten batches of ten classes each (a minimal sketch of this split follows the list).

  • model contains classes that implement the baselines, i.e., fine-tuning, Learning without Forgetting and iCaRL, and the ResNet-32 backbone.

  • report contains the source code of the report and a pre-compiled PDF version.

  • results contains logs of some of the experiments we carried out, serialized and saved as pickle objects. We include a notebook that provides a framework for exploring the results.

  • utils contains utility functions, mainly for producing plots and heatmaps.
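The class in data is not reproduced here, but the core idea of the incremental split can be sketched as follows, assuming a torchvision CIFAR-100 dataset. The helper name incremental_class_batches and the fixed-seed class ordering are illustrative assumptions, not the repository's actual implementation.

```python
import numpy as np
from torchvision.datasets import CIFAR100

def incremental_class_batches(dataset, classes_per_batch=10, seed=0):
    """Group sample indices into batches of disjoint classes.

    Returns a list of index arrays; batch i contains all samples whose
    class falls in the i-th group of `classes_per_batch` classes.
    """
    rng = np.random.default_rng(seed)
    class_order = rng.permutation(100)  # random class order, fixed by seed
    targets = np.asarray(dataset.targets)
    batches = []
    for start in range(0, 100, classes_per_batch):
        group = class_order[start:start + classes_per_batch]
        batches.append(np.where(np.isin(targets, group))[0])
    return batches

# Example: ten batches of ten classes each over the training set.
train_set = CIFAR100(root="./data", train=True, download=True)
batches = incremental_class_batches(train_set)
```

Each index array can then be wrapped in a Subset and presented to the model one batch at a time, mirroring the ten-batch incremental protocol described above.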

References

[1] K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–778, 2016.
[2] G. Hinton, O. Vinyals, and J. Dean. Distilling the Knowledge in a Neural Network. arXiv preprint arXiv:1503.02531, 2015.
[3] S. Hou, X. Pan, C. C. Loy, Z. Wang, and D. Lin. Learning a Unified Classifier Incrementally via Rebalancing. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 831–839, 2019.
[4] A. Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Technical report, University of Toronto, 2009.
[5] Z. Li and D. Hoiem. Learning without Forgetting. European Conference on Computer Vision (ECCV), 2016.
[6] S.-A. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert. iCaRL: Incremental Classifier and Representation Learning. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 5533–5542, 2017.
