Incremental-Learning's repositories
Awesome-Incremental-Learning
Awesome Incremental Learning
continual-learning
PyTorch implementation of various methods for continual learning (XdG, EWC, online EWC, SI, LwF, DGR, DGR+distill, RtF, iCaRL).
Knowledge-Distillation-Keras-1
A simple approach to implementing knowledge distillation in Keras
CVPR19_Incremental_Learning
Learning a Unified Classifier Incrementally via Rebalancing
distillation
Keras + TensorFlow experiments with knowledge distillation on the EMNIST dataset
End-to-End-Incremental-Learning
PyTorch implementation of End-to-End Incremental Learning (Castro et al., ECCV 2018)
EndToEndIncrementalLearning
End-to-End Incremental Learning
incremental-learning
PyTorch implementation of the ACCV 2018 paper "Revisiting Distillation and Incremental Classifier Learning"
knowledge-distillation-keras
A machine learning experiment with knowledge distillation in Keras
structure_knowledge_distillation
Official code for the paper "Structured Knowledge Distillation for Semantic Segmentation" (CVPR 2019 oral), with extensions to other tasks
agem
Official TensorFlow implementation of Averaged Gradient Episodic Memory (A-GEM)
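The core of A-GEM, as described in the paper behind the `agem` repo, is a single gradient projection: if the current-task gradient conflicts with a reference gradient computed on episodic memory, project it onto the half-space where the reference loss does not increase. A minimal NumPy sketch of that step (function and variable names here are illustrative, not the repo's API):

```python
import numpy as np

def agem_project(grad, ref_grad):
    """A-GEM projection step.

    grad:     flattened gradient of the current-task loss
    ref_grad: flattened gradient of the loss on the episodic-memory batch
    Returns grad unchanged if the two agree (non-negative dot product);
    otherwise removes the conflicting component along ref_grad.
    """
    dot = float(np.dot(grad, ref_grad))
    if dot >= 0.0:
        return grad  # no interference with past tasks; keep gradient as-is
    # g_tilde = g - (g . g_ref / g_ref . g_ref) * g_ref
    return grad - (dot / float(np.dot(ref_grad, ref_grad))) * ref_grad
```

After projection, the update direction is orthogonal to (or aligned with) the memory gradient, so a small step no longer increases the estimated loss on stored examples.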
amis-editor-demo-vue
An amis-editor demo for Vue
chatgpt-web
A self-hosted web application built on the ChatGPT 3.5 API
EWC
TensorFlow implementation of Elastic Weight Consolidation
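The `EWC` repo implements Elastic Weight Consolidation, whose regularizer penalizes movement of parameters that were important (per the Fisher information) for previous tasks. A minimal sketch of that penalty, assuming parameters, anchor values, and Fisher estimates are stored as name-keyed NumPy arrays (names and signature are illustrative):

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, lam=1.0):
    """EWC quadratic penalty: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    params:     current parameters, dict of name -> ndarray
    old_params: parameters after training the previous task (theta*)
    fisher:     diagonal Fisher information estimates, same keys/shapes
    lam:        regularization strength
    """
    total = 0.0
    for name in params:
        diff = params[name] - old_params[name]
        total += float(np.sum(fisher[name] * diff ** 2))
    return 0.5 * lam * total
```

This term is added to the new task's loss, so high-Fisher weights are anchored near their old values while low-Fisher weights stay free to learn.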
iccv2019-inc
Code for the ICCV 2019 paper "Overcoming Catastrophic Forgetting with Unlabeled Data in the Wild"
IL-SemSegm
Code for the paper "Incremental Learning Techniques for Semantic Segmentation", Michieli U. and Zanuttigh P., ICCVW, 2019
incremental_learning
Initial code for the paper "Incremental Learning Through Deep Adaptation"
keras-imprinting
A brief Keras implementation of the paper "Low-Shot Learning with Imprinted Weights"
knowledge-distillation-keras-mnist
A simple demo of knowledge distillation on the MNIST dataset
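The several distillation repos in this list (`Knowledge-Distillation-Keras-1`, `distillation`, `knowledge-distillation-keras`, `knowledge-distillation-keras-mnist`) all build on the same Hinton-style loss: a KL term between temperature-softened teacher and student outputs, mixed with the usual cross-entropy on hard labels. A framework-free NumPy sketch of that loss (parameter names `T` and `alpha` follow the common convention; this is not any one repo's code):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax with max-subtraction for numerical stability."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """alpha * T^2 * KL(teacher_T || student_T) + (1 - alpha) * CE(labels, student).

    The T^2 factor keeps the soft-target gradient magnitude comparable
    across temperatures, as in Hinton et al.'s formulation.
    """
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    eps = 1e-12  # avoid log(0)
    soft = np.sum(p_t * (np.log(p_t + eps) - np.log(p_s + eps)), axis=-1)  # per-sample KL
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + eps)
    return float(np.mean(alpha * (T ** 2) * soft + (1.0 - alpha) * hard))
```

When the student's logits match the teacher's, the KL term vanishes and only the hard-label cross-entropy remains, which is a quick sanity check for any of these implementations.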
MER
Fork of the GEM project (https://github.com/facebookresearch/GradientEpisodicMemory) including the Meta-Experience Replay (MER) method from the ICLR 2019 paper (https://openreview.net/pdf?id=B1gTShAct7)
overcoming-catastrophic
TensorFlow implementation of "Overcoming catastrophic forgetting in neural networks"
OWM
Code for Continual Learning of Context-dependent Processing in Neural Networks
piggyback
Code for Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights
SupportNet
SupportNet: solving catastrophic forgetting in class incremental learning with support data