Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers

This repository contains the code for the NeurIPS 2019 paper "Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers" (previously titled "A Stratified Approach to Robustness for Randomly Smoothed Classifiers").

Outline

  • Please see each experiment in the corresponding directory (and the README therein).
  • The MNIST experiment has been released.
  • The ImageNet experiment has been released. (It has not been carefully checked; please let me know if you find any problems.)
  • The pre-computed ρ_r^{-1}(0.5) values and trained ResNet50 models have been released for the ImageNet experiment.
  • If you want to compute your own ρ_r^{-1}(0.5), please see the examples in the MNIST or ImageNet folder. (A sketch of how such pre-computed values are typically used for certification follows this list.)
  • Please let me know (guanghe@mit.edu) if you need the code for the decision tree experiment.
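
Pre-computed thresholds of this kind are typically used in the standard randomized-smoothing certification recipe: estimate a lower confidence bound on the smoothed classifier's top-class probability by Monte Carlo sampling, then certify the largest radius r whose threshold ρ_r^{-1}(0.5) the bound still clears. The sketch below illustrates that recipe only; it is not the repository's actual interface, and the function name, the flat list layout of the thresholds, and the choice of confidence bound are assumptions.

# Minimal sketch (not the repository's code): certify the largest radius
# given Monte Carlo votes and pre-computed thresholds rho_inv[r-1] = rho_r^{-1}(0.5).
from statsmodels.stats.proportion import proportion_confint

def certified_radius(top_class_votes, num_samples, rho_inv, alpha=0.001):
    """Largest radius r with p_lower >= rho_r^{-1}(0.5); 0 if none is certified.

    rho_inv is assumed to be a list whose (r-1)-th entry is the pre-computed
    threshold for radius r (hypothetical layout, not the released file format).
    """
    # One-sided Clopper-Pearson lower bound on the top-class probability.
    p_lower, _ = proportion_confint(top_class_votes, num_samples,
                                    alpha=2 * alpha, method="beta")
    radius = 0
    for r, threshold in enumerate(rho_inv, start=1):
        if p_lower >= threshold:
            radius = r  # the bound clears the threshold at radius r
        else:
            break       # thresholds grow with r, so no larger radius can be certified
    return radius

The early break relies on the thresholds being non-decreasing in the radius, which is why they can be scanned in order of increasing r.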

Citation:

If you find this repo useful for your research, please cite the paper:

@inproceedings{lee2019tight,
  title={Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers},
  author={Guang-He Lee and Yang Yuan and Shiyu Chang and Tommi S. Jaakkola},
  booktitle={Advances in Neural Information Processing Systems},
  year={2019}
}
