Plexiglass


A PyTorch toolbox for cybersecurity research and testing against adversarial attacks and deepfakes.

Installation

The first stable release is version 1.2.0.

To install the package from PyPI:

pip install --upgrade plexiglass

Usage

TL;DR: plexiglass.adversarial contains adversarial attacks and plexiglass.detectors contains deepfake detectors. Please refer to demo.ipynb for a detailed example.
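
As a quick orientation, the two namespaces above correspond to imports like the following (the specific classes are the ones used in the examples below):

from plexiglass.adversarial import FGSM, test_robustness  # adversarial attacks and robustness helpers
from plexiglass.detectors import MesoInception             # deepfake detector architectures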

Simple Usage

A simple way to test a model's robustness to adversarial attacks is to call test_robustness, which outputs a model's accuracy before and after the attack.

import torch
import torch.nn as nn
from plexiglass.adversarial import FGSM, test_robustness

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# ... ... ... #
# load model  #
# ... ... ... #

# instantiate the Fast Gradient Sign Method (FGSM) attack
model.eval()
attack = FGSM(model=model, loss=nn.CrossEntropyLoss(), eps=0.001, device=device)

# test_robustness reports the model's accuracy before and after the attack
# (loader is a torch.utils.data.DataLoader over the evaluation set)
accuracy = test_robustness(model=model, attack=attack, dataloader=loader, device=device)
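
For intuition, FGSM nudges each input by eps in the direction of the sign of the loss gradient. The following is a minimal standalone sketch of that idea for reference, not Plexiglass's own FGSM implementation; it assumes images are normalized to [0, 1]:

import torch

def fgsm_sketch(model, images, labels, loss_fn, eps):
    # x_adv = x + eps * sign(dL/dx), clipped back to the valid pixel range
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(model(images), labels)
    loss.backward()
    perturbed = images + eps * images.grad.sign()
    return perturbed.clamp(0, 1).detach()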

Manual Testing

Alternatively, you can call the attack object directly to get the perturbed images and evaluate the model with your own testing loop.

import torch
import torch.nn as nn
from plexiglass.adversarial import FGSM, test_robustness

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# instantiate the Fast Gradient Sign Method (FGSM) attack
model.eval()
attack = FGSM(model=model, loss=nn.CrossEntropyLoss(), eps=0.001, device=device)

# calling the attack directly returns the perturbed images
correct, total = 0, 0
for images, labels in loader:
    perturbed_images = attack(images, labels).to(device)
    outputs = model(perturbed_images)
    labels = labels.to(device)

    # accumulate accuracy on the perturbed inputs
    correct += (outputs.argmax(dim=1) == labels).sum().item()
    total += labels.size(0)

accuracy = correct / total
Deepfake Detection

Deepfake detectors are also available for training in Plexiglass. Currently, only MesoNet and MesoInception are available.

import torch
import torch.nn as nn
from plexiglass.detectors import MesoInception

model = MesoInception()
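
Plexiglass does not prescribe a training loop here, so the following is a minimal sketch of how such a detector could be trained. It assumes the detector takes 256x256 RGB images and outputs one real/fake probability per image, and it uses dummy tensors in place of a real deepfake dataset; adapt the loss, input size, and data loading to your setup.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from plexiglass.detectors import MesoInception

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# dummy data standing in for a real deepfake dataset (labels: 0 = real, 1 = fake)
images = torch.rand(32, 3, 256, 256)           # assumed input resolution
labels = torch.randint(0, 2, (32,)).float()
loader = DataLoader(TensorDataset(images, labels), batch_size=8, shuffle=True)

model = MesoInception().to(device)
criterion = nn.BCELoss()                        # assumes the detector outputs a probability in [0, 1]
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

model.train()
for epoch in range(5):
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        scores = model(x).view(-1)              # assumes output shape (batch, 1) or (batch,)
        loss = criterion(scores, y)
        loss.backward()
        optimizer.step()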

To request new features, please submit an issue.


License: MIT

