COD1995 / adversarial

Creating and defending against adversarial examples

adversarial

This repository contains PyTorch code to create and defend against adversarial attacks.

See this Medium article for a discussion on how to use and defend against the projected gradient attack.

Figure: example PGD adversarial attack created using this repo.

Cool fact: adversarially trained discriminative (not generative!) models can be used to interpolate between classes by creating large-epsilon adversarial examples against them.

Figure: MNIST class interpolation produced with large-epsilon adversarial examples.
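The interpolation trick above amounts to running a *targeted* attack with a large epsilon: repeatedly step the input down the loss of the target class while staying within an epsilon-ball. A minimal sketch of that idea is below; the function name `targeted_pgd` and its parameters are illustrative, not the repo's actual `adversarial.functional` API.

```python
import torch
import torch.nn as nn

def targeted_pgd(model, x, target, eps, step, iters):
    """Targeted PGD sketch: descend the loss of the *target* class.

    With a large eps, the perturbed input drifts toward the target
    class; on an adversarially trained model this can look like a
    class interpolation. Illustrative only, not the repo's API.
    """
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Step *down* the loss for the target class
        x_adv = x_adv.detach() - step * grad.sign()
        # Project back into the eps-ball and the valid pixel range
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv

# Toy demo: a linear "classifier" on 4 features, pushed toward class 1
torch.manual_seed(0)
model = nn.Linear(4, 2)
x = torch.rand(8, 4)
target = torch.ones(8, dtype=torch.long)
x_adv = targeted_pgd(model, x, target, eps=0.5, step=0.1, iters=10)
```

Because the projection clamps the perturbation to the eps-ball, `(x_adv - x).abs().max()` never exceeds `eps`, no matter how many iterations run.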

Contents

  • A Jupyter notebook demonstrating how to use and defend against the projected gradient attack (see notebooks/)

  • adversarial.functional contains functional-style implementations of a few different types of adversarial attacks

    • Fast Gradient Sign Method - white box - batch implementation
    • Projected Gradient Descent - white box - batch implementation
    • Local-search attack - black box, score-based - single image
    • Boundary attack - black box, decision-based - single image
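The simplest of the attacks listed above, the Fast Gradient Sign Method, takes a single step of size epsilon in the direction of the sign of the loss gradient. A minimal batch sketch follows; the function name `fgsm` and its signature are illustrative and may differ from the repo's `adversarial.functional` implementation.

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, eps):
    """One-step Fast Gradient Sign Method (white box, batched).

    Perturbs each input by eps in the direction of the sign of the
    gradient of the loss. Sketch only; not the repo's exact API.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Single signed-gradient step, kept inside the valid pixel range
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Toy demo: logistic regression on a batch of 8 random "images"
torch.manual_seed(0)
model = nn.Linear(4, 2)
x = torch.rand(8, 4)
y = torch.randint(0, 2, (8,))
x_adv = fgsm(model, x, y, eps=0.1)
```

Each element of `x_adv` differs from `x` by at most `eps`, since the sign step moves every coordinate by exactly `eps` (or less, after clamping to `[0, 1]`).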

Setup

Requirements

Listed in `requirements.txt`. Install with `pip install -r requirements.txt`, preferably in a virtualenv.

Tests (optional)

Run `pytest` from the root directory to execute the full test suite.


