There are 34 repositories under the adversarial-defense topic.
Must-read Papers on Textual Adversarial Attack and Defense
auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs
Code for our NeurIPS 2019 *spotlight* "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers"
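Several entries in this list build on randomized smoothing. As a point of reference, here is a minimal sketch of the smoothed prediction rule in PyTorch; `base_model` and the Monte Carlo parameters are placeholders, and the statistical certification test from these papers is omitted:

```python
import torch

@torch.no_grad()
def smoothed_predict(base_model, x, sigma=0.25, n=100, num_classes=10):
    # Classify n Gaussian-noised copies of x and return the majority class.
    # This Monte Carlo estimate approximates the smoothed classifier
    # g(x) = argmax_c P(f(x + noise) = c); the hypothesis test and
    # certified-radius computation from the papers are omitted here.
    counts = torch.zeros(num_classes)
    for _ in range(n):
        pred = base_model(x + sigma * torch.randn_like(x)).argmax(dim=-1)
        counts[pred] += 1
    return counts.argmax().item()
```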
A curated list of papers on adversarial machine learning (adversarial examples and defense methods).
A list of awesome resources for adversarial attack and defense methods in deep learning
This repository contains implementations of three adversarial example attacks (FGSM, IFGSM, and MI-FGSM) and defensive distillation as a defense against all three, using the MNIST dataset.
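For orientation, here is a minimal FGSM step in PyTorch, a sketch assuming a differentiable classifier `model` and inputs `x` in [0, 1] (all names are placeholders); IFGSM iterates this step with a projection onto the perturbation budget, and MI-FGSM additionally accumulates a momentum term over the gradients:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    # Single-step FGSM: move x by eps in the sign of the loss gradient.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # assumes pixel values in [0, 1]
```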
[ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models
Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs
Certified defense to adversarial examples using CROWN and IBP. Also includes GPU implementation of CROWN verification algorithm (in PyTorch).
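As a rough illustration of the bound-propagation idea, here is a hedged sketch of one IBP step through an affine layer in PyTorch; it is plain interval arithmetic on an input box, whereas CROWN's linear relaxations are tighter but considerably more involved:

```python
import torch

def ibp_linear(W, b, lower, upper):
    # Interval Bound Propagation through y = x @ W.T + b: track the box
    # center and radius; |W| maps the radius into the output space.
    center, radius = (upper + lower) / 2, (upper - lower) / 2
    out_center = center @ W.T + b
    out_radius = radius @ W.abs().T
    return out_center - out_radius, out_center + out_radius
```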
Adversarial attacks on Deep Reinforcement Learning (RL)
[ICLR 2021] "InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective" by Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, Jingjing Liu
CVPR 2022 Workshop Robust Classification
Adversarial Distributional Training (NeurIPS 2020)
Machine Learning Attack Series
😎 A curated list of awesome real-world adversarial examples resources
This repository provides studies on the security of language models for code (CodeLMs).
Code for the paper: Adversarial Training Against Location-Optimized Adversarial Patches. ECCV-W 2020.
PyTorch implementation of Parametric Noise Injection for adversarial defense
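The core PNI idea is sketched below under assumptions about the noise parameterization: the 0.25 initialization and the weight-std scaling are illustrative only, and the paper trains the scale `alpha` jointly with adversarial training:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PNILinear(nn.Module):
    # Sketch of Parametric Noise Injection on weights: Gaussian noise whose
    # scale is tied to the weight statistics, multiplied by a learned alpha.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.alpha = nn.Parameter(torch.tensor(0.25))  # learned noise scale

    def forward(self, x):
        w = self.linear.weight
        if self.training:
            w = w + self.alpha * torch.randn_like(w) * w.detach().std()
        return F.linear(x, w, self.linear.bias)
```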
Learnable Boundary Guided Adversarial Training (ICCV2021)
[IEEE TIP 2021] Self-Attention Context Network: Addressing the Threat of Adversarial Attacks for Hyperspectral Image Classification
GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks
Code for the paper "Consistency Regularization for Certified Robustness of Smoothed Classifiers" (NeurIPS 2020)
Understanding Catastrophic Overfitting in Single-step Adversarial Training [AAAI 2021]
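For context, single-step adversarial training replaces each clean batch with its FGSM perturbation before the usual update; a hedged sketch reusing the `fgsm_attack` function from the earlier FGSM example:

```python
def fgsm_train_step(model, optimizer, x, y, eps=8 / 255):
    # One step of single-step adversarial training. Catastrophic overfitting
    # is the failure mode where robustness to multi-step (PGD) attacks
    # suddenly collapses partway through this kind of training.
    x_adv = fgsm_attack(model, x, y, eps)  # sketched earlier in this list
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```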
Source Code for 'SECurity evaluation platform FOR Speaker Recognition' released in 'Defending against Audio Adversarial Examples on Speaker Recognition Systems'
Adversarial Ranking Attack and Defense, ECCV, 2020.
Adversarial Attack and Defense in Deep Ranking, T-PAMI, 2024
[ECCV 2020] PyTorch code for Open-set Adversarial Defense
Code for the paper "SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness" (NeurIPS 2021)
Enhancing Adversarial Robustness for Deep Metric Learning, CVPR, 2022
Minimal implementation of Denoised Smoothing (https://arxiv.org/abs/2003.01908) in TensorFlow.
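The denoised-smoothing pipeline simply prepends a learned denoiser to a frozen pretrained classifier so that the smoothing procedure sketched earlier applies without retraining the classifier. Below is a hedged sketch; the linked repository is in TensorFlow, but PyTorch is used here for consistency with the other sketches, and both module arguments are placeholders:

```python
import torch.nn as nn

class DenoisedSmoothing(nn.Module):
    # Wrap a frozen pretrained classifier behind a denoiser; randomized
    # smoothing can then be applied to this composite model as-is.
    def __init__(self, denoiser, classifier):
        super().__init__()
        self.denoiser = denoiser
        self.classifier = classifier.eval()
        for p in self.classifier.parameters():
            p.requires_grad_(False)  # the pretrained classifier stays fixed

    def forward(self, noisy_x):
        return self.classifier(self.denoiser(noisy_x))
```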
Implementation of the paper "Transferring Robustness for Graph Neural Network Against Poisoning Attacks".
LSA: a Layer Sustainability Analysis framework for analyzing layer vulnerability in a given neural network. LSA can be a helpful toolkit for assessing deep neural networks and for extending adversarial training approaches toward improving the sustainability of model layers via layer monitoring and analysis.
:computer: :bulb: Bachelor's Thesis on Adversarial Machine Learning Attacks and Defences
A Robust Adversarial Network-Based End-to-End Communications System With Strong Generalization Ability Against Adversarial Attacks