CleverHans (latest release: v2.0.0)

An adversarial example library for constructing attacks, building defenses, and benchmarking both.

This repository contains the source code for CleverHans, a Python library to benchmark machine learning systems' vulnerability to adversarial examples. You can learn more about such vulnerabilities on the accompanying blog.

The CleverHans library is under continual development, always welcoming contributions of the latest attacks and defenses. In particular, we always welcome help towards resolving the issues currently open.

Setting up CleverHans

Dependencies

This library uses TensorFlow to accelerate graph computations performed by many machine learning models. Installing TensorFlow is therefore a pre-requisite.

You can find instructions here. For better performance, it is also recommended to install TensorFlow with GPU support (detailed instructions on how to do this are available in the TensorFlow installation documentation).

Installing TensorFlow will take care of all other dependencies like numpy and scipy.
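As a quick sanity check (a minimal sketch, not part of CleverHans itself), you can verify from a Python shell that TensorFlow and these dependencies are importable before proceeding:

# Sanity check: confirm TensorFlow and its bundled dependencies import.
import numpy
import scipy
import tensorflow as tf

print(tf.__version__)  # CleverHans is tested against TensorFlow 1.0 and 1.1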

Installation

Once dependencies have been taken care of, you can install CleverHans using pip or by cloning this GitHub repository.

pip installation

If you are installing CleverHans using pip, run the following command:

pip install -e git+https://github.com/tensorflow/cleverhans.git#egg=cleverhans

Manual installation

If you are installing CleverHans manually, make sure TensorFlow is installed first (see Dependencies above). Then, run the following command to clone the CleverHans repository into a folder of your choice:

git clone https://github.com/tensorflow/cleverhans

On UNIX machines, it is recommended to add your clone of this repository to the PYTHONPATH variable so that you can import cleverhans from any folder:

export PYTHONPATH="/path/to/cleverhans":$PYTHONPATH

To make that change permanent, add the export line to your shell's profile (e.g., ~/.bashrc).

Currently supported setups

Although CleverHans is likely to work on many other machine configurations, we currently test it with Python {2.7, 3.5} and TensorFlow {1.0, 1.1} on Ubuntu 14.04.5 LTS (Trusty Tahr).

Tutorials

To help you get started with the functionality provided by this library, the cleverhans_tutorials/ folder contains the following tutorials:

  • MNIST with FGSM (code): this tutorial covers how to train an MNIST model using TensorFlow, craft adversarial examples using the fast gradient sign method, and make the model more robust to adversarial examples using adversarial training (a minimal usage sketch follows this list).
  • MNIST with FGSM using Keras (code): this tutorial covers how to define an MNIST model with Keras and train it using TensorFlow, craft adversarial examples using the fast gradient sign method, and make the model more robust to adversarial examples using adversarial training.
  • MNIST with JSMA (code): this tutorial covers how to define an MNIST model with Keras, train it using TensorFlow, and craft adversarial examples using the Jacobian-based saliency map approach.
  • MNIST using a black-box attack (code): this tutorial implements the black-box attack described in this paper. The adversary trains a substitute model: a copy that imitates the black-box model by observing the labels that the black-box model assigns to inputs chosen carefully by the adversary. The adversary then uses the substitute model's gradients to find adversarial examples that are misclassified by the black-box model as well.
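As a taste of what the FGSM tutorials build toward, here is a minimal sketch of how an attack is typically constructed and run. It is not taken verbatim from the tutorials; the trained Keras classifier keras_model is a hypothetical name, and TensorFlow 1.x semantics are assumed:

# Minimal sketch (hypothetical `keras_model`, TensorFlow 1.x assumed).
import tensorflow as tf
from cleverhans.attacks import FastGradientMethod
from cleverhans.utils_keras import KerasModelWrapper

sess = tf.Session()
x = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))

# Wrap the Keras model so CleverHans can access its logits.
wrapped_model = KerasModelWrapper(keras_model)

# Build a symbolic graph that perturbs x with the fast gradient sign method.
fgsm = FastGradientMethod(wrapped_model, sess=sess)
adv_x = fgsm.generate(x, eps=0.3, clip_min=0., clip_max=1.)

# Concrete adversarial examples are then obtained by running the graph:
# adv_images = sess.run(adv_x, feed_dict={x: test_images})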

Some models used in the tutorials are defined using Keras, which should be installed before running these tutorials. Installation instructions for Keras can be found here. Note that you should configure Keras to use the TensorFlow backend. You can find instructions for setting the Keras backend on this page.

Examples

The examples/ folder contains additional scripts to showcase different uses of the CleverHans library or get you started competing in different adversarial example contests.

List of attacks

You can find a full list of attacks, along with their function signatures, at cleverhans.readthedocs.io.
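Attacks share a common interface: instantiate the attack with a (wrapped) model, then call generate for a symbolic tensor or generate_np for numpy arrays. Here is a hedged sketch with the basic iterative method, reusing the wrapped_model and sess assumed in the tutorial sketch above, with x_test a hypothetical numpy array of test inputs:

# Sketch of the shared attack interface; wrapped_model and sess as above.
from cleverhans.attacks import BasicIterativeMethod

bim = BasicIterativeMethod(wrapped_model, sess=sess)
x_adv = bim.generate_np(x_test,        # numpy array of test inputs (assumed)
                        eps=0.3,       # total max-norm perturbation budget
                        eps_iter=0.05, # step size per iteration
                        nb_iter=10,    # number of iterations
                        clip_min=0.,
                        clip_max=1.)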

Reporting benchmarks

When reporting benchmarks, please:

  • Use a versioned release of CleverHans. You can find a list of released versions here.
  • Either use the latest version, or, if comparing to an earlier publication, use the same version as the earlier publication.
  • Report which attack method was used.
  • Report any configuration variables used to determine the behavior of the attack.

For example, you might report "We benchmarked the robustness of our method to adversarial attack using v2.0.0 of CleverHans. On a test set modified by the FastGradientMethod with a max-norm eps of 0.3, we obtained a test set accuracy of 71.3%."
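For illustration only, the accuracy figure in such a report could be computed along these lines, assuming the keras_model and fgsm objects from the sketches above plus numpy test data x_test and one-hot labels y_test:

# Illustrative benchmark computation; all names are assumptions from above.
import numpy as np

x_adv = fgsm.generate_np(x_test, eps=0.3, clip_min=0., clip_max=1.)
preds = keras_model.predict(x_adv).argmax(axis=1)
adv_accuracy = np.mean(preds == y_test.argmax(axis=1))
print("Adversarial test accuracy: %.1f%%" % (100. * adv_accuracy))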

Contributing

Contributions are welcome! To speed the code review process, we ask that:

  • Bug fixes be initiated through GitHub pull requests.

Citing this work

If you use CleverHans for academic research, you are highly encouraged (though not required) to cite the following paper:

@article{papernot2017cleverhans,
  title={cleverhans v2.0.0: an adversarial machine learning library},
  author={Nicolas Papernot and Nicholas Carlini and Ian Goodfellow and
  Reuben Feinman and Fartash Faghri and Alexander Matyasko and Karen
  Hambardzumyan and Yi-Lin Juang and Alexey Kurakin and Ryan Sheatsley and
  Abhibhav Garg and Yen-Chen Lin},
  journal={arXiv preprint arXiv:1610.00768},
  year={2017}
}

About the name

The name CleverHans is a reference to a presentation by Bob Sturm titled “Clever Hans, Clever Algorithms: Are Your Machine Learnings Learning What You Think?" and the corresponding publication, "A Simple Method to Determine if a Music Information Retrieval System is a 'Horse'." Clever Hans was a horse that appeared to have learned to answer arithmetic questions, but had in fact only learned to read social cues that enabled him to give the correct answer. In controlled settings where he could not see people's faces or receive other feedback, he was unable to answer the same questions. The story of Clever Hans is a metaphor for machine learning systems that may achieve very high accuracy on a test set drawn from the same distribution as the training data, but that do not actually understand the underlying task and perform poorly on other inputs.

Authors

This library is managed and maintained by Ian Goodfellow (Google Brain), Nicolas Papernot (Pennsylvania State University), and Ryan Sheatsley (Pennsylvania State University).

The following authors contributed 100 lines or more (ordered according to the GitHub contributors page):

  • Nicolas Papernot (Pennsylvania State University, Google Brain intern)
  • Nicholas Carlini (UC Berkeley)
  • Ian Goodfellow (Google Brain)
  • Reuben Feinman (Symantec)
  • Fartash Faghri (University of Toronto, Google Brain intern)
  • Alexander Matyasko (Nanyang Technological University)
  • Karen Hambardzumyan (YerevaNN)
  • Yi-Lin Juang (NTUEE)
  • Alexey Kurakin (Google Brain)
  • Ryan Sheatsley (Pennsylvania State University)
  • Abhibhav Garg (IIT Delhi)
  • Yen-Chen Lin (National Tsing Hua University)
  • Paul Hendricks

Copyright

Copyright 2017 - Google Inc., OpenAI and Pennsylvania State University. CleverHans is released under the MIT License.
