ArlanCooper / Awesome_KnowlegeDistillation

Awesome Resources of Knowledge Distillation. Knowledge distillation from scratch.

Awesome_KnowledgeDistillation

This repo collects introductory material on Knowledge Distillation (KD) and surveys the current state of research.

If you find a remarkable paper in this area (a seminal work, a thorough survey, or a highly cited paper), please leave a note in an issue.

If this repo helps you, please support it with a star 👍!

DRL Basics

  • An expert's summary: a reinforcement learning roadmap [post]
  • Policy Gradient Algorithms [post]
  • Deterministic Policy Gradient Algorithms [paper]
  • Continuous Control with Deep Reinforcement Learning [paper]
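The policy-gradient posts above all build on the REINFORCE score-function estimator. As a minimal, self-contained sketch (a NumPy two-armed bandit with a softmax policy and a running-mean baseline; all hyperparameters are illustrative, not taken from any of the linked resources):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Two-armed bandit: arm 1 pays more on average, so REINFORCE
# should shift probability mass toward it.
true_means = np.array([0.2, 0.8])
theta = np.zeros(2)   # policy parameters (logits)
baseline = 0.0        # running-mean baseline to reduce variance
lr = 0.1

for step in range(2000):
    pi = softmax(theta)
    a = rng.choice(2, p=pi)
    r = rng.normal(true_means[a], 0.1)
    # grad of log pi(a) for a softmax policy: one_hot(a) - pi
    grad_logp = -pi
    grad_logp[a] += 1.0
    theta += lr * (r - baseline) * grad_logp
    baseline += 0.01 * (r - baseline)

print(softmax(theta))  # mass should concentrate on the better arm
```

The same update, with a learned value function as the baseline and a neural policy, is the core of the actor-critic methods covered in the posts.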

KD Overview

  • Knowledge Distillation Review: a look back at 20 papers [post]
  • Knowledge Distillation: a powerful tool for model compression (a thorough summary) [post]
  • Knowledge Distillation: A Survey [paper]
  • Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks [paper]

KD Methods

Logits(Response)-Based

  • Distilling the Knowledge in a Neural Network [paper]
  • Deep Mutual Learning [paper]
  • On the Efficacy of Knowledge Distillation [paper]
  • Self-training with Noisy Student improves ImageNet classification [paper]
  • Training deep neural networks in generations: A more tolerant teacher educates better students [paper]
  • Distillation-Based Training for Multi-Exit Architectures [paper]
  • Knowledge Extraction with No Observable Data [paper] [code]
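The common thread in these logits-based papers is the loss from "Distilling the Knowledge in a Neural Network": KL divergence between temperature-softened teacher and student distributions, mixed with the usual cross-entropy on hard labels. A minimal NumPy sketch (the temperature `T=4.0` and weight `alpha=0.7` are illustrative choices, not values prescribed by the paper):

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax: higher T softens the distribution.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Hinton-style KD loss: alpha * T^2 * KL(teacher || student) at
    temperature T, plus (1 - alpha) * cross-entropy on the hard labels.
    The T^2 factor keeps gradient magnitudes comparable across temperatures."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    p_hard = softmax(student_logits, 1.0)
    ce = -np.log(p_hard[np.arange(len(labels)), labels] + 1e-12)
    return alpha * (T ** 2) * kl.mean() + (1 - alpha) * ce.mean()

teacher = np.array([[8.0, 2.0, 1.0]])
student = np.array([[5.0, 1.5, 1.0]])
print(distillation_loss(student, teacher, labels=np.array([0])))
```

Variants in the list above mostly change where the soft targets come from (an ensemble, a peer network, a noisier student-turned-teacher), not this loss itself.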

Feature-Based

Relation-Based

Online Distillation

Self-Distillation

Adversarial KD

  • MEAL: Multi-Model Ensemble via Adversarial Learning [paper] [code]
  • Feature-map-level Online Adversarial Knowledge Distillation [paper]
  • Data-Free Learning of Student Networks [paper]
  • KDGAN: Knowledge Distillation with Generative Adversarial Networks [paper]
  • Training Shallow and Thin Networks for Acceleration via Knowledge Distillation with Conditional Adversarial Networks [paper]

Multi-Teacher KD

Cross-Modal KD

Graph-Based KD

Attention-Based KD

Data-Free KD

Quantized KD

Lifelong KD

NAS-based KD

KD Applications

In Reinforcement Learning

  • Policy Distillation [paper]
  • Distilling Policy Distillation [paper]
  • PoPS: Policy Pruning and Shrinking for Deep Reinforcement Learning [paper] [code]
  • Distillation Strategies for Proximal Policy Optimization [paper]
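Policy distillation, as in the papers above, trains a student policy to match a frozen teacher's action distributions over visited states, typically by minimizing a KL divergence. A toy NumPy sketch with linear softmax policy heads over random state features (the dimensions, learning rate, and step count are all illustrative assumptions):

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
S, A, D = 64, 4, 8                   # states, actions, feature dim
X = rng.normal(size=(S, D))          # state features (stand-in for a replay buffer)
W_teacher = rng.normal(size=(D, A))  # frozen teacher policy head
p_t = softmax(X @ W_teacher)         # teacher action distributions

W_student = np.zeros((D, A))
lr = 0.5
for _ in range(500):
    p_s = softmax(X @ W_student)
    # Gradient of mean KL(p_t || p_s) w.r.t. the student logits is (p_s - p_t)/S.
    W_student -= lr * X.T @ (p_s - p_t) / S

kl = np.mean(np.sum(p_t * (np.log(p_t) - np.log(softmax(X @ W_student))), axis=-1))
print(kl)  # should approach zero as the student matches the teacher
```

In practice the states come from trajectories rolled out under the teacher (or student) rather than random features, and "Distilling Policy Distillation" analyzes how that sampling choice affects the result.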

Related Repositories

  • dkozlov/awesome-knowledge-distillation [awesome]
  • danielmcpark/awesome-knowledge-distillation [awesome]
  • FLHonker/Awesome-Knowledge-Distillation [awesome]
  • peterliht/knowledge-distillation-pytorch [code]
  • ikostrikov/pytorch-a2c-ppo-acktr-gail [code]
