There are 30 repositories under the fgsm topic.
Advbox is a toolbox for generating adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MXNet, Keras, and TensorFlow, and it can benchmark the robustness of machine learning models. Advbox also provides a command-line tool to generate adversarial examples with zero coding.
Implementation of Papers on Adversarial Examples
Detection by Attack: Detecting Adversarial Samples by Undercover Attack
TensorFlow implementation of an adversarial attack on Capsule Networks
PyTorch library for adversarial attack and training
This repository contains implementations of three adversarial example attacks (FGSM, I-FGSM, MI-FGSM) and defensive distillation as a defense against all three, using the MNIST dataset.
SHIELD: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression
The first real-world adversarial attack on the MTCNN face detection system to date
Implementation of gradient-based adversarial attacks (FGSM, MI-FGSM, PGD)
Implementation of adversarial training under the fast gradient sign method (FGSM), projected gradient descent (PGD), and CW attacks using Wide-ResNet-28-10 on CIFAR-10. The sample code remains reusable when the model or dataset changes.
Detection of network traffic anomalies using unsupervised machine learning
Reproduce multiple adversarial attack methods
Paddle-Adversarial-Toolbox (PAT) is a Python library for Deep Learning Security based on PaddlePaddle.
Implementation of Kervolutional Neural Networks (CVPR 2019), compared with a CNN under white-box attack
The rise and fall of six dynasties pass like a dream; the drifting months startle the passing of time. Even through winter's cold and a long road ahead, this resolve shall not be taken away.
Using adversarial attacks to confuse deep-chicken-terminator :shield: :chicken:
Adversarial Attack on 3D U-Net model: Brain Tumour Segmentation.
A TensorFlow adversarial machine learning attack toolkit that adds perturbations to cause image recognition models to misclassify an image
Adversarial Attack using a DCGAN
Implementing white box adversarial attacks on parameters and architecture of CNN in PyTorch
WideResNet implementation on MNIST dataset. FGSM and PGD adversarial attacks on standard training, PGD adversarial training, and Feature Scattering adversarial training.
FGSM attack PyTorch module for semantic segmentation networks, with examples provided for DeepLab V3.
An adversarial patch trained with I-FGSM to attack the MTCNN face detection system
ECE C147: Neural Networks & Deep Learning. Repository for "Developing Robust Networks to Defend Against Adversarial Examples". Implementing adversarial data augmentation on CNNs and RNNs.
Simple example notebooks using PyTorch
Pre-trained CNN model in PyTorch reaching 89% accuracy on the Fashion-MNIST dataset, with an adversarial attack implemented against it. Results are reported in the repository.
A Comprehensive Study on Cloud-Based Model Interpretability, Accountability, and Privacy in Machine Learning with Resilience to Adversarial Attacks
Implementation of an FGSM (Fast Gradient Sign Method) attack on a fine-tuned MobileNet architecture trained for flood detection in images.
Adversarial attack on a malware detector
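Most of the repositories above implement the same core technique: the one-step Fast Gradient Sign Method, which perturbs an input by a small step in the sign direction of the loss gradient. As a minimal sketch (not taken from any repository listed here), the following NumPy example applies FGSM to a toy binary logistic-regression classifier; the weight vector, input, and epsilon are invented for illustration:

```python
import numpy as np

def fgsm_attack(w, b, x, y, epsilon):
    """One-step FGSM against a binary logistic-regression classifier.

    For cross-entropy loss, the gradient of the loss w.r.t. the input x
    is (sigmoid(w.x + b) - y) * w; FGSM steps epsilon in its sign direction.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # predicted probability of class 1
    grad_x = (p - y) * w                     # dL/dx for cross-entropy loss
    x_adv = x + epsilon * np.sign(grad_x)    # single signed gradient step
    return np.clip(x_adv, 0.0, 1.0)          # keep inputs in the valid range

# Hypothetical toy data: fixed weights and a clean input with true label 1.
w = np.array([2.0, -1.0, 0.5])
b = 0.0
x = np.array([0.6, 0.2, 0.8])
x_adv = fgsm_attack(w, b, x, y=1.0, epsilon=0.1)
```

Because the step moves against the true label, the adversarial input's logit for class 1 (`x_adv @ w + b`) is lower than the clean input's, i.e. the loss increases, which is exactly the effect the attack repositories above exploit. Iterative variants such as I-FGSM and PGD simply repeat this step with a smaller epsilon and project back into an epsilon-ball around the original input.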