AidanKelley / foolbox

A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX

Home Page: https://foolbox.jonasrauber.de

Foolbox Native: A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX

Foolbox is a Python library that lets you easily run adversarial attacks against machine learning models like deep neural networks. It is built on top of EagerPy and works natively with models in PyTorch, TensorFlow, JAX, and NumPy.

🔥 Design

Foolbox 3, a.k.a. Foolbox Native, has been rewritten from scratch using EagerPy instead of NumPy to achieve native performance on models developed in PyTorch, TensorFlow, and JAX, all with one code base.

  • Native Performance: Foolbox 3 is built on top of EagerPy and runs natively in PyTorch, TensorFlow, JAX, and NumPy, and comes with real batch support (see the sketch after this list).
  • State-of-the-art attacks: Foolbox provides a large collection of state-of-the-art gradient-based and decision-based adversarial attacks.
  • Type Checking: Catch bugs before running your code thanks to extensive type annotations in Foolbox.
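
As a rough sketch of the single-code-base idea (the tf_model, torch_model, jax_model, images, and labels names below are placeholders, not part of the Foolbox API), wrapping the model is the only framework-specific step; the attack call itself is identical:

import foolbox as fb

# wrap a model from whichever framework you use; inputs assumed to live in [0, 1]
fmodel = fb.TensorFlowModel(tf_model, bounds=(0, 1))
# fmodel = fb.PyTorchModel(torch_model, bounds=(0, 1))  # same pattern for PyTorch
# fmodel = fb.JAXModel(jax_model, bounds=(0, 1))        # same pattern for JAX

# the attack code is framework-agnostic and operates on whole batches
attack = fb.attacks.LinfPGD()
_, advs, success = attack(fmodel, images, labels, epsilons=[0.03])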

📖 Documentation

  • Guide: The best place to get started with Foolbox is the official guide.
  • Tutorial: If you are looking for a tutorial, check out this Jupyter notebook.
  • Documentation: Finally, you can find the full API documentation on ReadTheDocs.

🚀 Quickstart

pip install foolbox

🎉 Example

import foolbox as fb

# wrap your PyTorch model (in eval mode); bounds describe the valid input range
model = ...
fmodel = fb.PyTorchModel(model, bounds=(0, 1))

# run the L-infinity Projected Gradient Descent attack on a batch of images and
# labels for several perturbation sizes (epsilons) at once
attack = fb.attacks.LinfPGD()
epsilons = [0.0, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 1.0]
_, advs, success = attack(fmodel, images, labels, epsilons=epsilons)

More examples can be found in the examples folder, e.g. a full ResNet-18 example.
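
As a small follow-up sketch (assuming success is the boolean tensor returned above, with one row per epsilon and one column per input), the per-epsilon robust accuracy can be read off directly:

# success[i, j] is True if the attack succeeded on input j at epsilons[i];
# averaging over the batch gives the attack success rate for each epsilon
robust_accuracy = 1 - success.float32().mean(axis=-1)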

📄 Citation

If you use Foolbox for your work, please cite our paper using this BibTeX entry:

@inproceedings{rauber2017foolbox,
  title={Foolbox: A Python toolbox to benchmark the robustness of machine learning models},
  author={Rauber, Jonas and Brendel, Wieland and Bethge, Matthias},
  booktitle={Reliable Machine Learning in the Wild Workshop, 34th International Conference on Machine Learning},
  year={2017},
  url={http://arxiv.org/abs/1707.04131},
}

🐍 Compatibility

We currently test with the following versions:

  • PyTorch 1.4.0
  • TensorFlow 2.1.0
  • JAX 0.1.57
  • NumPy 1.18.1
