DICE: Domain-attack Invariant Causal Learning for Improved Data Privacy Protection and Adversarial Robustness
Code for KDD 2022 Paper: "DICE: Domain-attack Invariant Causal Learning for Improved Data Privacy Protection and Adversarial Robustness" by Qibing Ren, Yiting Chen, Yichuan Mo, Qitian Wu and Junchi Yan.
06/25/2022 - Our code is released.
- python = 3.9.5
- torch = 1.10.1
- `main.py` is the entry point: it loads the dataset and runs training and evaluation.
- `train.py` implements our three training modes, `causal`, `causal_poison`, and `causal_adv`, and also implements vanilla standard and adversarial training.
- `attacks.py` implements our adversarial attack functions.
- `model/` contains the necessary modules of our DICE model, along with the series of baseline backbones.
- `poison/` covers adversarial poisoning generation and evaluation; it is modified from the adversarial poisons repo [1] and implements our poison attack with DICE.
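As a rough illustration of the projected-gradient (PGD-style) attack loop that code like `attacks.py` typically implements, here is a minimal pure-Python sketch on a toy 1-D loss. All names and the toy loss here are illustrative only, not the repo's actual API:

```python
# Minimal PGD-style sketch: maximize a loss around x within an
# L-infinity ball of radius epsilon, using signed gradient steps.
# (Illustrative only; the repo's attacks.py operates on image tensors.)

def pgd_attack(x, grad_fn, epsilon=0.3, alpha=0.1, steps=10):
    """Return an adversarial point near x that increases the loss."""
    x_adv = x
    for _ in range(steps):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * (1.0 if g > 0 else -1.0)   # signed gradient step
        x_adv = max(x - epsilon, min(x + epsilon, x_adv))  # project into the ball
    return x_adv

# Toy loss L(x) = (x - 2)^2 with gradient 2*(x - 2); starting at x = 0,
# the attack walks to the boundary of the epsilon-ball that raises the loss.
adv = pgd_attack(0.0, lambda x: 2 * (x - 2))  # -> -0.3
```

The two key ingredients, a signed gradient step of size `alpha` and a projection back into the `epsilon`-ball around the clean input, carry over directly to image-space attacks.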
To reproduce the attacks and defenses of DICE from our paper, we provide example scripts for CIFAR-10 below.
python main.py --cfg scripts/cifar10/causal_poison.yaml --prefix your/exp/name
For adversarial poisoning generation, please refer to the `poison/` directory.
python main.py --cfg scripts/cifar10/causal_attack.yaml --prefix your/exp/name
python main.py --cfg scripts/cifar10/causal_adv.yaml --prefix your/exp/name
For robustness evaluation, run the following script:
python main.py --cfg scripts/cifar10/eval.yaml --prefix your/exp/name
Note that in `eval.yaml`, you need to set the variable `PRETRAINED_PATH` to the model path so that the model parameters can be loaded. You are welcome to try your own configurations. If you find a better YAML configuration, please let us know by raising an issue or a PR and we will update the benchmark!
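For example, the relevant entry might look like the following sketch (the checkpoint path is illustrative; substitute the path to your own trained model):

```yaml
# eval.yaml (fragment): point PRETRAINED_PATH at your trained checkpoint
PRETRAINED_PATH: checkpoints/cifar10/causal_adv/model_best.pth
```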
We provide pretrained DICE models for the three downstream tasks in the paper. The model weights are available via Google Drive.
@inproceedings{ren2022dice,
title={DICE: Domain-attack Invariant Causal Learning for Improved Data Privacy Protection and Adversarial Robustness},
author={Ren, Qibing and Chen, Yiting and Mo, Yichuan and Wu, Qitian and Yan, Junchi},
booktitle={KDD},
year={2022}
}
[1]: adversarial poisons: https://github.com/lhfowl/adversarial_poisons
[2]: Bag-of-Tricks-for-AT: https://github.com/P2333/Bag-of-Tricks-for-AT