ApGa / adversarial_deepfakes

Deepfakes with an adversarial twist.


This repository provides code and additional materials for the paper "Adversarial Perturbations Fool Deepfake Detectors" by Apurva Gandhi and Shomik Jain, published at IJCNN 2020.

The paper applies adversarial perturbations to deepfake images so that common deepfake detectors misclassify them as real. We also explore two defenses for deepfake detectors: (i) Lipschitz regularization and (ii) the Deep Image Prior (DIP).
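The attack idea can be sketched in a few lines of numpy. This toy example is not the repository's code (which attacks CNN detectors on images); the linear "detector" and all names here are invented purely to illustrate how a gradient-sign perturbation lowers a detector's fake-score while staying imperceptibly small:

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """Fast Gradient Sign Method against a toy linear 'detector'.

    Illustrative sketch only: the model (logistic regression) and
    names are invented for the example. y = 1 means 'fake'.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # sigmoid of the logit
    grad_x = (p - y) * w                     # dBCE/dx for this model
    # Step in the sign of the loss gradient, keep pixels in [0, 1]
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.0
x = rng.uniform(size=16)                     # a 'fake' image, label y = 1
x_adv = fgsm(x, w, b, y=1.0, eps=0.05)

# The detector's fake-probability drops while the change stays tiny
p_before = 1 / (1 + np.exp(-(w @ x + b)))
p_after = 1 / (1 + np.exp(-(w @ x_adv + b)))
print(p_after <= p_before, np.max(np.abs(x_adv - x)) <= 0.05 + 1e-9)
```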

Link to preprint: https://arxiv.org/abs/2003.10596.
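As a rough intuition for the Lipschitz-regularization defense: bounding how fast the detector's output can change with its input also bounds how far a small perturbation can move the score. For a linear logit the Lipschitz constant is simply the weight norm, so the idea reduces to the following numpy sketch (a toy stand-in for the paper's regularization of CNN detectors; the dataset and names are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))             # toy dataset, invented for the sketch
y = (X @ rng.normal(size=8) > 0).astype(float)

def train(lam, steps=500, lr=0.1):
    """Logistic regression with a penalty lam * ||w||^2. For this
    linear model ||w|| *is* the Lipschitz constant of the logit, so
    the penalty directly caps how far a small input change can move
    the score; the paper applies the analogous idea to CNN detectors.
    """
    w, b = np.zeros(8), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y) / len(y) + 2 * lam * w)
        b -= lr * np.mean(p - y)
    return w, b

w_plain, _ = train(lam=0.0)
w_reg, _ = train(lam=0.1)
# Smaller weight norm = smaller Lipschitz constant = less room
# for a bounded perturbation to swing the detector's score
print(np.linalg.norm(w_reg) < np.linalg.norm(w_plain))
```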

Files:

  • adv_examples.py: Adversarial Examples Creation
  • classifier.py: Deepfake Detector Creation
  • cw.py: Carlini-Wagner L2 Norm Attack
  • dip_template.py: Deep Image Prior Defense
  • evaluation.py: Model Evaluation Script
  • generate_dataset.py: Deepfake Generation Script
  • ijcnn_presentation.pdf: Presentation Slides from IJCNN 2020
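cw.py implements the Carlini-Wagner L2 attack. Its core objective, stripped of the tanh box constraint and the binary search over the trade-off constant c that the full attack uses, can be sketched as follows (again on a toy linear "detector" with invented names, not the repository's implementation):

```python
import numpy as np

def cw_l2_sketch(x, w, b, c=1.0, steps=300, lr=0.05):
    """Core of the Carlini-Wagner L2 objective on a linear logit
    z = w.x + b (z > 0 means 'fake'):

        minimize ||delta||^2 + c * max(z(x + delta), 0)

    The real attack adds a tanh change of variables and binary-searches
    c; this sketch keeps only the trade-off between a small L2
    perturbation and pushing the logit below the decision boundary."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        z = w @ (x + delta) + b
        grad = 2 * delta + (c * w if z > 0 else 0.0)
        delta -= lr * grad
    return x + delta

w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([1.0, -0.5, 0.2])            # detector says 'fake': z = 2.2
x_adv = cw_l2_sketch(x, w, b)
print(w @ x_adv + b < w @ x + b)          # logit pushed toward 'real'
```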

License: MIT


Languages: Python 100.0%