A curated list of awesome papers on dataset distillation and related applications, inspired by awesome-computer-vision.
Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes as input a large real dataset to be distilled (training set) and outputs a small synthetic distilled dataset, which is evaluated by training models on the distilled dataset and testing them on a separate real dataset (validation/test set). A good small distilled dataset is not only useful for dataset understanding, but also has various applications (e.g., continual learning, privacy preservation, and neural architecture search). This task was first introduced in the 2018 paper Dataset Distillation [Tongzhou Wang et al., '18], along with a proposed algorithm based on backpropagation through optimization steps.
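The backpropagation-through-optimization idea can be sketched on a toy problem: take an inner gradient step on the synthetic data, evaluate the resulting model on the real data, and descend that outer loss with respect to the synthetic data itself. The sketch below is illustrative only, not the paper's implementation: the model is a scalar linear regressor, and a finite-difference outer gradient stands in for automatic differentiation through the unrolled inner steps (which real implementations use, with many inner steps and learned learning rates).

```python
import numpy as np

# Toy "real" dataset to be distilled: scalar regression y = 2x.
rng = np.random.default_rng(0)
X_real = rng.uniform(-1.0, 1.0, size=50)
y_real = 2.0 * X_real

def inner_step(w, x, y, lr=0.5):
    """One gradient step of a scalar linear model w on data (x, y)."""
    grad = np.mean(2.0 * (w * x - y) * x)
    return w - lr * grad

def outer_loss(syn, w0=0.0):
    """Train briefly on the synthetic point, then evaluate on real data."""
    w = inner_step(w0, syn[0], syn[1])
    return float(np.mean((w * X_real - y_real) ** 2))

# Distill a single synthetic (x, y) pair by gradient descent on the
# outer loss; central finite differences stand in for backprop
# through inner_step.
syn = np.array([0.1, 0.0])  # [x_syn, y_syn], arbitrary init
eps, outer_lr = 1e-4, 0.5
history = []
for _ in range(200):
    grad = np.zeros(2)
    for i in range(2):
        bump = np.zeros(2)
        bump[i] = eps
        grad[i] = (outer_loss(syn + bump) - outer_loss(syn - bump)) / (2 * eps)
    syn = syn - outer_lr * grad
    history.append(outer_loss(syn))
```

After distillation, a fresh model trained for one step on the single synthetic point fits the entire real dataset well; the outer loss in `history` drops close to zero.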
In recent years (2019-now), dataset distillation has gained increasing attention in the research community, with more papers published each year across many institutes and labs. This body of work has steadily improved dataset distillation and explored its many variants and applications.
This project is curated and maintained by Guang Li, Bo Zhao, and Tongzhou Wang.
- Dataset Distillation (Tongzhou Wang et al., 2018)
- Gradient-Based Hyperparameter Optimization Through Reversible Learning (Dougal Maclaurin et al., ICML 2015)
- Dataset Condensation with Gradient Matching (Bo Zhao et al., ICLR 2021)
- Dataset Condensation with Differentiable Siamese Augmentation (Bo Zhao et al., ICML 2021)
- Dataset Distillation by Matching Training Trajectories (George Cazenavette et al., CVPR 2022)
- Dataset Condensation with Contrastive Signals (Saehyung Lee et al., ICML 2022)
- Delving into Effective Gradient Matching for Dataset Condensation (Zixuan Jiang et al., 2022)
- Dataset Condensation with Distribution Matching (Bo Zhao et al., 2021)
- CAFE: Learning to Condense Dataset by Aligning Features (Kai Wang et al., CVPR 2022)
- Dataset Meta-Learning from Kernel Ridge-Regression (Timothy Nguyen et al., ICLR 2021)
- Dataset Distillation with Infinitely Wide Convolutional Networks (Timothy Nguyen et al., NeurIPS 2021)
- Dataset Distillation using Neural Feature Regression (Yongchao Zhou et al., 2022)
- Synthesizing Informative Training Samples with GAN (Bo Zhao et al., 2022)
- Dataset Condensation via Efficient Synthetic-Data Parameterization (Jang-Hyun Kim et al., ICML 2022)
- Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks (Zhiwei Deng et al., 2022)
- Dataset Condensation with Latent Space Knowledge Factorization and Sharing (Hae Beom Lee et al., 2022)
- Flexible Dataset Distillation: Learn Labels Instead of Images (Ondrej Bohdal et al., NeurIPS 2020 Workshop)
- Soft-Label Dataset Distillation and Text Dataset Distillation (Ilia Sucholutsky et al., IJCNN 2021)
- DC-BENCH: Dataset Condensation Benchmark (Justin Cui et al., 2022)
- SecDD: Efficient and Secure Method for Remotely Training Neural Networks (Ilia Sucholutsky et al., AAAI 2021 Student Abstract)
- Privacy for Free: How does Dataset Condensation Help Privacy? (Tian Dong et al., ICML 2022)
- Can We Achieve Robustness from Data Alone? (Nikolaos Tsilivis et al., 2022)
- Federated Learning via Synthetic Data (Jack Goetz et al., 2020)
- Distilled One-Shot Federated Learning (Yanlin Zhou et al., 2020)
- FedSynth: Gradient Compression via Synthetic Data in Federated Learning (Shengyuan Hu et al., 2022)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning (Yuanhao Xiong et al., 2022)
- Federated Learning via Decentralized Dataset Distillation in Resource-Constrained Edge Environments (Rui Song et al., 2022)
- Reducing Catastrophic Forgetting with Learning on Synthetic Data (Wojciech Masarczyk et al., CVPR 2020 Workshop)
- Condensed Composite Memory Continual Learning (Felix Wiewel et al., IJCNN 2021)
- Distilled Replay: Overcoming Forgetting through Synthetic Samples (Andrea Rosasco et al., 2021)
- Sample Condensation in Online Continual Learning (Mattia Sangermano et al., IJCNN 2022)
- PRANC: Pseudo RAndom Networks for Compacting deep models (Parsa Nooralinejad et al., 2022)
- Graph Condensation for Graph Neural Networks (Wei Jin et al., ICLR 2022)
- Condensing Graphs via One-Step Gradient Matching (Wei Jin et al., KDD 2022)
- Graph Condensation via Receptive Field Distribution Matching (Mengyang Liu et al., 2022)
- Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data (Felipe Petroski Such et al., ICML 2020)
- Knowledge Condensation Distillation (Chenxin Li et al., ECCV 2022)
- Data Distillation for Text Classification (Yongqi Li et al., 2021)
- Soft-Label Anonymous Gastric X-ray Image Distillation (Guang Li et al., ICIP 2020)
- Compressed Gastric Image Generation Based on Soft-Label Dataset Distillation for Medical Data Sharing (Guang Li et al., CMPB 2022)
- Wearable ImageNet: Synthesizing Tileable Textures via Dataset Distillation (George Cazenavette et al., CVPR 2022 Workshop)