# Graph-based Knowledge Distillation

A collection of graph-based knowledge distillation papers, organized by the taxonomy of the survey cited at the end of this page.
## DKD: Graph-based Knowledge Distillation for Deep Neural Networks
Method | Title | Link | Venue |
---|---|---|---|
IEP | Interpretable Embedding Procedure Knowledge Transfer via Stacked Principal Component Analysis and Graph Neural Network | Paper | 2021 AAAI |
HKD | Distilling Holistic Knowledge with Graph Neural Networks | Paper | 2021 ICCV |
CAG | Context-Aware Graph Inference with Knowledge Distillation for Visual Dialog | Paper | 2021 TPAMI |
DKWISL | Distilling Knowledge from Well-Informed Soft Labels for Neural Relation Extraction | Paper | 2020 AAAI |
KTG | Knowledge Transfer Graph for Deep Collaborative Learning | Paper | 2019 ACCV |
MHGD | Graph-based Knowledge Distillation by Multi-head Attention Network | Paper | 2019 BMVC |
IRG | Knowledge Distillation via Instance Relationship Graph | Paper | 2019 CVPR |
DGCN | Binarized Collaborative Filtering with Distilling Graph Convolutional Networks | Paper | 2019 IJCAI |
GKD | Deep Geometric Knowledge Distillation with Graphs | Paper | 2019 ICASSP |
SPG | Spatio-Temporal Graph for Video Captioning With Knowledge Distillation | Paper | 2020 CVPR |
MorsE | Meta-Knowledge Transfer for Inductive Knowledge Graph Embedding | Paper | 2022 SIGIR |
GCLN | Dark Reciprocal-Rank: Teacher-to-student Knowledge Transfer from Self-localization Model to Graph-convolutional Neural Network | Paper | 2021 ICRA |
DOD | Deep Structured Instance Graph for Distilling Object Detectors | Paper | 2021 ICCV |
BAF | Better and Faster: Knowledge Transfer from Multiple Self-supervised Learning Tasks via Graph Distillation for Video Classification | Paper | 2018 IJCAI |
LAD | Language Graph Distillation for Low-Resource Machine Translation | Paper | 2019 |
GD | Graph Distillation for Action Detection with Privileged Modalities | Paper | 2018 ECCV |
GCMT | Graph Consistency based Mean-Teaching for Unsupervised Domain Adaptive Person Re-Identification | Paper | 2021 IJCAI |
GraSSNet | Saliency Prediction with External Knowledge | Paper | 2021 WACV |
LSN | Learning Student Networks via Feature Embedding | Paper | 2020 TNNLS |
IntRA-KD | Inter-Region Affinity Distillation for Road Marking Segmentation | Paper | 2020 CVPR |
RKD | Relational Knowledge Distillation | Paper | 2019 CVPR |
CC | Correlation Congruence for Knowledge Distillation | Paper | 2019 ICCV |
SPKD | Similarity-Preserving Knowledge Distillation | Paper | 2019 ICCV |
KDExplainer | KDExplainer: A Task-oriented Attention Model for Explaining Knowledge Distillation | Paper | 2021 IJCAI |
TDD | Tree-like Decision Distillation | Paper | 2021 CVPR |
DualDE | DualDE: Dually Distilling Knowledge Graph Embedding for Faster and Cheaper Reasoning | Paper | 2022 WSDM |
KCAN | Conditional Graph Attention Networks for Distilling and Refining Knowledge Graphs in Recommendation | Paper | 2021 CIKM |
HKDIFM | Heterogeneous Knowledge Distillation using Information Flow Modeling | Paper | 2020 CVPR |
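The DKD methods above transfer relation-level knowledge (graphs built over instances, regions, or layers) between conventional deep networks. As a concrete illustration, here is a minimal sketch of a distance-wise relational loss in the spirit of RKD (2019 CVPR), where the student matches the teacher's pairwise-distance structure over a batch. The Huber loss and mean-distance normalization follow the paper; the batch size, feature widths, and epsilon are illustrative assumptions.

```python
# Minimal sketch of a distance-wise relational KD loss in the spirit of
# RKD (2019 CVPR). The Huber loss and mean-distance scaling follow the
# paper; batch size, feature widths, and the epsilon are assumptions.
import torch
import torch.nn.functional as F

def pairwise_distances(x: torch.Tensor) -> torch.Tensor:
    """Scale-normalized Euclidean distances between all pairs in a batch."""
    d = torch.cdist(x, x, p=2)                    # (B, B) distance matrix
    mask = ~torch.eye(len(x), dtype=torch.bool)   # ignore the zero diagonal
    mu = d[mask].mean()                           # mean off-diagonal distance
    return d / (mu + 1e-8)                        # scale-invariant relations

def rkd_distance_loss(teacher_emb, student_emb):
    """Match the student's pairwise-distance structure to the teacher's."""
    with torch.no_grad():
        t = pairwise_distances(teacher_emb)
    s = pairwise_distances(student_emb)
    return F.smooth_l1_loss(s, t)                 # Huber loss, as in RKD

# Toy usage: embeddings from any teacher/student backbone pair.
teacher_emb = torch.randn(32, 512)  # e.g. penultimate features of the teacher
student_emb = torch.randn(32, 128)  # the student's width may differ
loss = rkd_distance_loss(teacher_emb, student_emb)
```

Because only pairwise relations are matched, the student's embedding width is free to differ from the teacher's, which is what makes this family of losses convenient for heterogeneous teacher-student pairs.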
## GKD: Graph-based Knowledge Distillation for Graph Neural Networks
Method | Title | Link | Venue |
---|---|---|---|
HIRE | HIRE: Distilling high-order relational knowledge from heterogeneous graph neural networks | Paper | 2022 Neurocomputing |
GFKD | Graph-Free Knowledge Distillation for Graph Neural Networks | Paper | 2021 IJCAI |
LWC-KD | Graph Structure Aware Contrastive Knowledge Distillation for Incremental Learning in Recommender Systems | Paper | 2021 CIKM |
EGAD | EGAD: Evolving Graph Representation Learning with Self-Attention and Knowledge Distillation for Live Video Streaming Events | Paper | 2020 IEEE BigData |
GRL | Graph Representation Learning via Multi-task Knowledge Distillation | Paper | 2019 NeurIPS Workshop |
GFL | Graph Few-shot Learning via Knowledge Transfer | Paper | 2020 AAAI |
HGKT | Heterogeneous Graph-based Knowledge Transfer for Generalized Zero-shot Learning | Paper | 2020 ICPR |
AGNN | Amalgamating Knowledge from Heterogeneous Graph Neural Networks | Paper | 2021 CVPR |
CPF | Extract the Knowledge of Graph Neural Networks and Go Beyond it: An Effective Knowledge Distillation Framework | Paper | 2021 WWW |
LSP | Distilling Knowledge from Graph Convolutional Networks | Paper | 2020 CVPR |
GKD | GKD: Semi-supervised Graph Knowledge Distillation for Graph-Independent Inference | Paper | 2021 MICCAI |
scGCN | scGCN is a graph convolutional networks algorithm for knowledge transfer in single cell omics | Paper | 2021 Nature Communications |
MetaHG | Distilling Meta Knowledge on Heterogeneous Graph for Illicit Drug Trafficker Detection on Social Media | Paper | 2021 NeurIPS |
Cold Brew | Cold Brew: Distilling Graph Node Representations with Incomplete or Missing Neighborhoods | Paper | 2022 ICLR |
PGD | Privileged Graph Distillation for Cold Start Recommendation | Paper | 2021 SIGIR |
GLNN | Graph-less Neural Networks: Teaching Old MLPs New Tricks via Distillation | Paper | 2022 ICLR |
Distill2Vec | Distill2Vec: Dynamic Graph Representation Learning with Knowledge Distillation | Paper | 2020 ASONAM |
MT-GCN | Mutual Teaching for Graph Convolutional Networks | Paper | 2021 FGCS |
RDD | Reliable Data Distillation on Graph Convolutional Network | Paper | 2020 SIGMOD |
TinyGNN | TinyGNN: Learning Efficient Graph Neural Networks | Paper | 2020 KDD |
GLocalKD | Deep Graph-level Anomaly Detection by Glocal Knowledge Distillation | Paper | 2022 WSDM |
OAD | Online Adversarial Distillation for Graph Neural Networks | Paper | 2021 |
SCR | SCR: Training Graph Neural Networks with Consistency Regularization | Paper | 2021 |
ROD | ROD: Reception-aware Online Distillation for Sparse Graphs | Paper | 2021 KDD |
EGNN | EGNN: Constructing explainable graph neural networks via knowledge distillation | Paper | 2022 KBS |
CKD | Collaborative Knowledge Distillation for Heterogeneous Information Network Embedding | Paper | 2022 WWW |
G-CRD | On Representation Knowledge Distillation for Graph Neural Networks | Paper | 2021 |
BGNN | Binary Graph Neural Networks | Paper | 2021 CVPR |
EGSC | Slow Learning and Fast Inference: Efficient Graph Similarity Computation via Knowledge Distillation | Paper | 2021 NeurIPS |
HSKDM | A graph neural network-based node classification model on class-imbalanced graph data | Paper | 2022 KBS |
MustaD | Compressing Deep Graph Convolution Network with Multi-staged Knowledge Distillation | Paper | 2021 PLoS ONE |
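A recurring goal in this GKD line is removing message passing at inference time. The sketch below loosely follows GLNN (2022 ICLR): a graph-free MLP student is trained on a mix of ground-truth labels and a teacher GNN's soft predictions. The mixing weight `lam`, temperature `tau`, and the random stand-in tensors are illustrative assumptions rather than the paper's exact setup.

```python
# Minimal sketch of GNN-to-MLP distillation in the spirit of GLNN
# (2022 ICLR). The student sees only node features; the teacher GNN is
# run once over the graph to produce soft targets. lam and tau are
# illustrative assumptions, and random tensors stand in for real data.
import torch
import torch.nn as nn
import torch.nn.functional as F

def glnn_style_loss(student_logits, teacher_logits, labels, lam=0.5, tau=1.0):
    ce = F.cross_entropy(student_logits, labels)          # supervised term
    kd = F.kl_div(                                        # distillation term
        F.log_softmax(student_logits / tau, dim=-1),
        F.softmax(teacher_logits / tau, dim=-1),
        reduction="batchmean",
    ) * tau * tau
    return lam * ce + (1 - lam) * kd

# Toy usage: a graph-free MLP student; teacher_logits would come from a
# pretrained GNN (here a random stand-in).
num_nodes, feat_dim, num_classes = 100, 16, 7
student = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                        nn.Linear(64, num_classes))
x = torch.randn(num_nodes, feat_dim)
teacher_logits = torch.randn(num_nodes, num_classes)  # stand-in GNN outputs
labels = torch.randint(0, num_classes, (num_nodes,))
loss = glnn_style_loss(student(x), teacher_logits, labels)
```

Since the trained student needs no adjacency information, inference cost no longer depends on neighborhood fetching, which is the motivation shared by several of the entries above (e.g. GLNN, Cold Brew).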
## SKD: Graph-based Self-Knowledge Distillation
Method | Title | Link | Venue |
---|---|---|---|
LinkDist | Distilling Self-Knowledge From Contrastive Links to Classify Graph Nodes Without Passing Messages | Paper | 2021 |
IGSD | Iterative Graph Self-Distillation | Paper | 2020 Workshop on Self-Supervised Learning for the Web |
GNN-SD | On Self-Distilling Graph Neural Network | Paper | 2021 IJCAI |
SDSS | Multi-task Self-distillation for Graph-based Semi-Supervised Learning | Paper | 2021 |
SAIL | SAIL: Self-Augmented Graph Contrastive Learning | Paper | 2022 AAAI |
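SKD methods need no external teacher: the network supervises itself. The sketch below shows the general pattern with a layer-wise self-distillation term, loosely in the spirit of GNN-SD (On Self-Distilling Graph Neural Network); the dense-adjacency GCN, the MSE alignment to the deepest hidden state, and all sizes are simplifying assumptions, not the paper's exact adaptive discrepancy retaining objective.

```python
# Minimal sketch of layer-wise self-distillation on a GNN: shallow layers
# are pulled toward the deepest hidden representation of the *same*
# network, with no external teacher. The dense-adjacency propagation and
# MSE alignment are simplifying assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGCN(nn.Module):
    def __init__(self, dims=(16, 32, 32, 7)):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(a, b) for a, b in zip(dims, dims[1:]))

    def forward(self, x, adj_norm):
        hiddens = []
        for i, layer in enumerate(self.layers):
            x = layer(adj_norm @ x)              # propagate, then transform
            if i < len(self.layers) - 1:
                x = F.relu(x)
                hiddens.append(x)                # keep intermediate states
        return x, hiddens

def self_distill_loss(hiddens):
    """Align each shallow representation with the deepest hidden state."""
    target = hiddens[-1].detach()                # deepest layer as 'teacher'
    return sum(F.mse_loss(h, target) for h in hiddens[:-1])

# Toy usage with a random symmetric graph plus self-loops.
n = 50
adj = (torch.rand(n, n) < 0.1).float()
adj = ((adj + adj.t() + torch.eye(n)) > 0).float()
adj_norm = adj / adj.sum(1, keepdim=True).clamp(min=1)  # row normalization
model = TinyGCN()
logits, hiddens = model(torch.randn(n, 16), adj_norm)
aux = self_distill_loss(hiddens)                 # add to the task loss
```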
## Citation
    @article{liu2023graph,
      title={Graph-based Knowledge Distillation: A survey and experimental evaluation},
      author={Liu, Jing and Zheng, Tongya and Zhang, Guanzheng and Hao, Qinfen},
      journal={arXiv preprint arXiv:2302.14643},
      year={2023}
    }