This project evaluates several representative adversarial defense approaches for brain-computer interfaces (BCIs). The following defenses were evaluated:
- Natural training: NT.py
- Adversarial training: AT.py
- TRADES: TRADES.py
- HYDRA: HYDRA.py
- Stochastic activation pruning: stochastic_activation_pruning.py
- Input transformation: input_transform.py
- Random self-ensemble: random_self_ensemble.py
- Self-ensemble adversarial training: self_ensemble_AT.py
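The defenses above differ mainly in the training objective. As a hedged illustration only (not the repository's actual AT.py, whose model and attack are not shown here), a minimal adversarial-training step with a one-step FGSM inner attack on a simple logistic model could look like:

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """One-step FGSM perturbation for a logistic model (illustrative only)."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # predicted probability
    grad_x = (p - y)[:, None] * w[None, :]   # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)         # L-inf bounded perturbation

def adversarial_train_step(x, y, w, b, eps=0.1, lr=0.5):
    """Update (w, b) on adversarial examples instead of clean ones."""
    x_adv = fgsm(x, y, w, b, eps)
    p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
    grad_w = x_adv.T @ (p - y) / len(y)      # gradient of the mean loss
    grad_b = np.mean(p - y)
    return w - lr * grad_w, b - lr * grad_b
```

The key point the sketch shows: each parameter update is computed on perturbed inputs generated against the current model, which is the structure shared by AT, TRADES, and the self-ensemble AT variants.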
Two white-box attacks and two black-box attacks are implemented in attack_lib.py.
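The contents of attack_lib.py are not shown here. As an assumption-labeled sketch of what a white-box L-inf attack typically looks like, projected gradient descent (PGD) iterates small gradient-sign steps and projects back into the eps-ball around the original input; the logistic model below is a stand-in, not the repository's EEG models:

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """White-box L-inf PGD against a logistic model (illustrative sketch)."""
    x0 = x.copy()
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        grad_x = (p - y)[:, None] * w[None, :]      # loss gradient w.r.t. input
        x_adv = x_adv + alpha * np.sign(grad_x)     # ascent step on the loss
        x_adv = np.clip(x_adv, x0 - eps, x0 + eps)  # project into the eps-ball
    return x_adv
```

A black-box attack differs only in where the gradient comes from: it is estimated by queries or taken from a substitute model rather than from the target model itself.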
The script evaluation.py evaluates a model after it has been trained with a defense. For example, to evaluate EEGNet trained with AT against untargeted attacks under the L-inf constraint:

python3 evaluation.py --model EEGNet --defense AT --distance inf --target False