There are 28 repositories under the adversarial-defense topic.
Must-read Papers on Textual Adversarial Attack and Defense
auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs
Code for our NeurIPS 2019 *spotlight* "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers"
A curated list of papers on adversarial machine learning (adversarial examples and defense methods).
A list of awesome resources for adversarial attack and defense methods in deep learning
This repository contains implementations of three adversarial example attack methods (FGSM, I-FGSM, MI-FGSM) and of defensive distillation as a defense against all three attacks, using the MNIST dataset.
Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs
CVPR 2022 Workshop Robust Classification
Certified defense to adversarial examples using CROWN and IBP. Also includes GPU implementation of CROWN verification algorithm (in PyTorch).
Adversarial attacks on Deep Reinforcement Learning (RL)
Adversarial Distributional Training (NeurIPS 2020)
😎 A curated list of awesome real-world adversarial examples resources
Machine Learning Attack Series
PyTorch implementation of Parametric Noise Injection for adversarial defense
Code for the paper: Adversarial Training Against Location-Optimized Adversarial Patches. ECCV-W 2020.
GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks
Learnable Boundary Guided Adversarial Training (ICCV2021)
Code for the paper "Consistency Regularization for Certified Robustness of Smoothed Classifiers" (NeurIPS 2020)
Understanding Catastrophic Overfitting in Single-step Adversarial Training [AAAI 2021]
Adversarial Ranking Attack and Defense, ECCV, 2020.
[ECCV 2020] PyTorch code for Open-set Adversarial Defense
Adversarial Attack and Defense in Deep Ranking, T-PAMI, 2024
Minimal implementation of Denoised Smoothing (https://arxiv.org/abs/2003.01908) in TensorFlow.
Enhancing Adversarial Robustness for Deep Metric Learning, CVPR, 2022
Implementation of paper "Transferring Robustness for Graph Neural Network Against Poisoning Attacks".
:computer: :bulb: Bachelor's Thesis on Adversarial Machine Learning Attacks and Defences
LSA: Layer Sustainability Analysis, a framework for analyzing layer vulnerability in a given neural network. LSA is a toolkit for assessing deep neural networks and extending adversarial training approaches, improving the robustness of model layers through layer monitoring and analysis.
A Robust Adversarial Network-Based End-to-End Communications System With Strong Generalization Ability Against Adversarial Attacks
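Several entries above revolve around the FGSM family of attacks (FGSM, I-FGSM, MI-FGSM). As context, here is a minimal NumPy sketch of plain FGSM against a toy logistic-regression model; the model, weights, and `eps` value are illustrative assumptions, not code from any repository listed above.

```python
import numpy as np

def bce_loss(x, y, w, b):
    """Binary cross-entropy loss of the logistic model sigmoid(w.x + b)."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step: move x by eps along the sign of the loss gradient,
    the first-order direction that most increases the loss under an
    L-infinity budget of eps."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    grad_x = (p - y) * w  # d(BCE)/dx for the logistic model above
    return x + eps * np.sign(grad_x)

# Toy example with random weights and a random input (hypothetical data).
rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.1
x, y = rng.normal(size=4), 1.0
x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
```

I-FGSM simply applies this step repeatedly with a smaller step size (projecting back into the eps-ball), and MI-FGSM additionally accumulates a momentum term over the gradient directions.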
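Several of the certified-defense entries above (smoothed classifiers, denoised smoothing, consistency regularization) build on randomized smoothing, where the smoothed classifier returns the class most probable under Gaussian input noise. A minimal Monte-Carlo sketch, with a deliberately trivial stand-in base classifier (an assumption for illustration, not the method of any repository above):

```python
import numpy as np

def smoothed_predict(base_classify, x, sigma, n_samples, rng):
    """Monte-Carlo estimate of the randomized-smoothing classifier
    g(x) = argmax_c P[ base_classify(x + N(0, sigma^2 I)) = c ]."""
    noise = rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    votes = np.array([base_classify(x + n) for n in noise])
    classes, counts = np.unique(votes, return_counts=True)
    return classes[np.argmax(counts)]  # majority vote over noisy copies

# Hypothetical base classifier: predicts by the sign of the first coordinate.
base = lambda v: int(v[0] > 0)
rng = np.random.default_rng(0)
pred = smoothed_predict(base, np.array([2.0, 0.0]),
                        sigma=0.5, n_samples=200, rng=rng)
```

The certified variants additionally turn the vote counts into a high-confidence lower bound on the top-class probability, which yields a certified L2 robustness radius around `x`.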