A library for experimenting with, training and evaluating neural networks, with a focus on adversarial robustness.

robustness package

Install via pip: pip install robustness

Read the docs: https://robustness.readthedocs.io/en/latest/index.html

robustness is a package we (students in the MadryLab) created to make training, evaluating, and exploring neural networks flexible and easy. We use it in almost all of our projects (whether they involve adversarial training or not!) and it will be a dependency in many of our upcoming code releases. A few projects using the library include:

We demonstrate how to use the library in a set of walkthroughs and our API reference. Functionality provided by the library includes:

  • Performing input manipulation using robust (or standard) models: making adversarial examples, inverting representations, feature visualization, etc. The library offers a variety of optimization options (e.g., a choice between real/estimated gradients, Fourier/pixel basis, custom loss functions) and is easily extendable; a short sketch of this workflow follows this list.
  • Importing robustness as a package, which allows for easy training of neural networks with support for custom loss functions, logging, data loading, and more! A good introduction can be found in our two-part walkthrough (Part 1, Part 2).
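
As a concrete illustration of the first bullet, here is a condensed sketch in the spirit of the walkthroughs: load a pretrained CIFAR-10 model and generate L2 PGD adversarial examples for one batch. Paths are placeholders, and some keyword names may vary between library versions; the walkthroughs and API reference are authoritative.

```python
from robustness import model_utils
from robustness.datasets import CIFAR

# Load a dataset object and a (possibly robust) pretrained ResNet-50.
ds = CIFAR('/path/to/cifar')                      # placeholder path
model, _ = model_utils.make_and_restore_model(
    arch='resnet50', dataset=ds, resume_path='/path/to/checkpoint.pt')
model.eval()

_, val_loader = ds.make_loaders(workers=4, batch_size=16)
im, label = next(iter(val_loader))
im, label = im.cuda(), label.cuda()               # the library assumes CUDA

# PGD attack parameters (L2 constraint).
attack_kwargs = {
    'constraint': '2',     # use 'inf' for an L-infinity attack
    'eps': 0.5,            # attack budget
    'step_size': 0.1,
    'iterations': 20,
    'do_tqdm': False,
}

# make_adv=True runs the attack and returns the predictions on,
# and the pixels of, the adversarial batch.
adv_logits, adv_im = model(im, label, make_adv=True, **attack_kwargs)
```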

Note: robustness requires PyTorch to be installed with CUDA support.

Pretrained models

Along with the training code, we release a number of pretrained models for different datasets, norms, and ε-train values. This list will be updated as we release more or improved models. Please cite this library (see the Citation section below) if you use these models in your research.

For each (model, ε-test) combination we evaluate 20-step and 100-step PGD with a step size of 2.5 * ε-test / num_steps. Since these two accuracies are quite close to each other, we do not consider more steps of PGD. For each value of ε-test, we highlight the best robust accuracy achieved over different ε-train in bold.
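
In code, the step-size rule reads as follows (the helper function is just for illustration and is not part of the library):

```python
def pgd_step_size(eps_test: float, num_steps: int) -> float:
    """Step size used for the PGD evaluations in the tables below."""
    return 2.5 * eps_test / num_steps

# For example, the CIFAR10 L2 evaluation at eps_test = 0.5:
print(pgd_step_size(0.5, 20))    # 0.0625  (20-step PGD)
print(pgd_step_size(0.5, 100))   # 0.0125  (100-step PGD)
```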

Note #1: We did not perform any hyperparameter tuning and simply used the same hyperparameters as standard training. It is likely that exploring different training hyperparameters would increase these robust accuracies by a few percentage points.

Note #2: The PyTorch checkpoint (.pt) files below were saved with the following versions of PyTorch and Dill:

torch==1.1.0
dill==0.2.9
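
If you load one of these checkpoints manually rather than through `model_utils.make_and_restore_model`, pass dill as the pickle module. A minimal sketch, assuming the versions listed above (the filename is a placeholder):

```python
import dill
import torch

# The released .pt files were serialized with dill, so torch.load needs it
# as the pickle_module.
checkpoint = torch.load('cifar_l2_0_5.pt', pickle_module=dill, map_location='cpu')
print(list(checkpoint.keys()))  # inspect what the checkpoint actually contains
```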

CIFAR10 L2-norm (ResNet50):

CIFAR10 L2-robust accuracy (20-step / 100-step PGD):

| ε-test \ ε-train | 0.0           | 0.25            | 0.5                 | 1.0                 |
|------------------|---------------|-----------------|---------------------|---------------------|
| 0.0              | 95.25% / -    | 92.77% / -      | 90.83% / -          | 81.62% / -          |
| 0.25             | 8.66% / 7.34% | 81.21% / 81.19% | **82.34% / 82.31%** | 75.53% / 75.53%     |
| 0.5              | 0.28% / 0.14% | 62.30% / 62.13% | **70.17% / 70.11%** | 68.63% / 68.61%     |
| 1.0              | 0.00% / 0.00% | 21.18% / 20.66% | 40.47% / 40.22%     | **52.72% / 52.61%** |
| 2.0              | 0.00% / 0.00% | 0.58% / 0.46%   | 5.23% / 4.97%       | **18.59% / 18.05%** |

CIFAR10 Linf-norm (ResNet50):

CIFAR10 Linf-robust accuracy (20-step / 100-step PGD):

| ε-test \ ε-train | 0 / 255       | 8 / 255             |
|------------------|---------------|---------------------|
| 0 / 255          | 95.25% / -    | 87.03% / -          |
| 8 / 255          | 0.00% / 0.00% | **53.49% / 53.29%** |
| 16 / 255         | 0.00% / 0.00% | **18.13% / 17.62%** |

ImageNet L2-norm (ResNet50):

  • ε = 0.0 (PyTorch pre-trained)
  • ε = 3.0

ImageNet L2-robust accuracy (20-step / 100-step PGD):

| ε-test \ ε-train | 0.0           | 3.0                 |
|------------------|---------------|---------------------|
| 0.0              | 76.13% / -    | 57.90% / -          |
| 0.5              | 3.35% / 2.98% | **54.42% / 54.42%** |
| 1.0              | 0.44% / 0.37% | **50.67% / 50.67%** |
| 2.0              | 0.16% / 0.14% | **43.04% / 43.02%** |
| 3.0              | 0.13% / 0.12% | **35.16% / 35.09%** |

ImageNet Linf-norm (ResNet50):

ImageNet Linf-robust accuracy (20-step / 100-step PGD):

| ε-test \ ε-train | 0.0           | 4 / 255             | 8 / 255             |
|------------------|---------------|---------------------|---------------------|
| 0 / 255          | 76.13% / -    | 62.42% / -          | 47.91% / -          |
| 4 / 255          | 0.04% / 0.03% | **33.58% / 33.38%** | 33.06% / 33.03%     |
| 8 / 255          | 0.01% / 0.01% | 13.13% / 12.73%     | **19.63% / 19.52%** |
| 16 / 255         | 0.01% / 0.01% | 1.53% / 1.37%       | **5.00% / 4.82%**   |
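
For reference, an evaluation in the spirit of these tables could be sketched with the attack interface shown earlier. This is an untested outline (placeholder paths; keyword names may differ across versions), not the exact script used to produce the numbers above.

```python
from robustness import model_utils
from robustness.datasets import CIFAR

EPS_TEST, NUM_STEPS = 0.5, 20                       # one (ε-test, #steps) cell

ds = CIFAR('/path/to/cifar')                        # placeholder path
model, _ = model_utils.make_and_restore_model(
    arch='resnet50', dataset=ds, resume_path='cifar_l2_0_5.pt')
model.eval()

_, val_loader = ds.make_loaders(workers=4, batch_size=128)

attack_kwargs = {
    'constraint': '2',
    'eps': EPS_TEST,
    'step_size': 2.5 * EPS_TEST / NUM_STEPS,        # step-size rule from above
    'iterations': NUM_STEPS,
    'do_tqdm': False,
}

correct, total = 0, 0
for im, label in val_loader:
    im, label = im.cuda(), label.cuda()
    adv_logits, _ = model(im, label, make_adv=True, **attack_kwargs)
    correct += (adv_logits.argmax(dim=1) == label).sum().item()
    total += label.size(0)

print(f'PGD-{NUM_STEPS} robust accuracy @ eps={EPS_TEST}: {100 * correct / total:.2f}%')
```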

Citation

If you use this library in your research, please cite it as follows:

(Have you used the package and found it useful? Let us know!)

Maintainers

Contributors/Committers

License: MIT License

