There are 28 repositories under the adversarial-robustness topic.
RobustBench: a standardized adversarial robustness benchmark [NeurIPS'21 Benchmarks and Datasets Track]
Code for the paper "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"
EasyRobust: an Easy-to-use library for state-of-the-art Robust Computer Vision Research with PyTorch.
alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, and 2023)
Square Attack: a query-efficient black-box adversarial attack via random search [ECCV 2020]
[TPAMI 2022 & NeurIPS 2020] Official implementation of Self-Adaptive Training
[CVPR 2022] "Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations" by Tianlong Chen*, Peihao Wang*, Zhiwen Fan, Zhangyang Wang
Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs
Unofficial implementation of the DeepMind papers "Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples" & "Fixing Data Augmentation to Improve Adversarial Robustness" in PyTorch
[CVPR 2020] Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning
Feature Scattering Adversarial Training (NeurIPS 2019)
[ICLR 2021] "Robust Overfitting may be mitigated by properly learned smoothening" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, Shiyu Chang, Zhangyang Wang
[ICML 2021] The official GitHub repository for training L_inf-dist nets with high certified accuracy.
Contains notebooks for the PAR tutorial at CVPR 2021.
Implementation of the algorithm from the paper "A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning".
[ICLR 2022] "Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?" by Yonggan Fu, Shunyao Zhang, Shang Wu, Cheng Wan, Yingyan Lin
[ICLR 2020] "Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference"
[ICLR 2022] Training L_inf-dist-net with faster acceleration and better training strategies
Adversarial Attack and Defense in Deep Ranking, T-PAMI, 2024
Decoupled Kullback-Leibler Divergence Loss (DKL)
Revisiting Residual Networks for Adversarial Robustness: An Architectural Perspective
This repository contains the official implementation of the paper "Reliable Graph Neural Networks via Robust Aggregation" (NeurIPS, 2020).
PyTorch implementation of Targeted Adversarial Perturbations for Monocular Depth Predictions (NeurIPS 2020)
Contact: Alexander Hartl, Maximilian Bachl, Fares Meghdouri. Explainability methods and Adversarial Robustness metrics for RNNs for Intrusion Detection Systems. Also contains code for "SparseIDS: Learning Packet Sampling with Reinforcement Learning" (branch "rl").
Code for the paper "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers"
[ICML 2023] "NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations" by Yonggan Fu, Ye Yuan, Souvik Kundu, Shang Wu, Shunyao Zhang, Yingyan (Celine) Lin
Connecting Interpretability and Robustness in Decision Trees through Separation
[ICLR 2022] Boosting Randomized Smoothing with Variance Reduced Classifiers
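Several of the repositories above build on randomized smoothing, which certifies robustness by classifying many Gaussian-noised copies of an input and taking a majority vote. The following is a minimal sketch of that prediction step only, using a hypothetical toy linear classifier as a stand-in for a trained network; it is not the implementation from any of the repositories listed.

```python
import numpy as np

def base_classifier(x):
    # Hypothetical toy "classifier" standing in for a neural network:
    # predicts class 1 if the input's mean is positive, else class 0.
    return int(x.mean() > 0.0)

def smoothed_predict(x, sigma=0.25, n_samples=1000, seed=0):
    """Randomized-smoothing prediction: classify many Gaussian-noised
    copies of x and return the majority-vote class."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(2, dtype=int)
    for _ in range(n_samples):
        noisy = x + rng.normal(scale=sigma, size=x.shape)
        votes[base_classifier(noisy)] += 1
    return int(votes.argmax())

x = np.full(10, 0.5)        # comfortably in class 1
print(smoothed_predict(x))  # majority vote over noisy copies
```

In the certified-defense setting, the fraction of votes for the winning class is further converted into a provable l2 robustness radius; that statistical step is omitted here for brevity.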
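The Square Attack entry above is a query-efficient black-box attack based on random search: propose a localized perturbation, keep it only if it lowers the model's margin. A minimal 1-D sketch of that accept/reject loop on a hypothetical linear classifier (all names here are illustrative, not the repository's API):

```python
import numpy as np

def margin(w, x, y):
    # Margin of a toy linear classifier: positive means correctly classified.
    return y * (w @ x)

def random_search_attack(w, x, y, eps=0.3, iters=200, seed=0):
    """Square-Attack-style random search: propose a random contiguous
    block of +/-eps perturbation and accept it only if the margin drops."""
    rng = np.random.default_rng(seed)
    delta = np.zeros_like(x)
    best = margin(w, x + delta, y)
    d = x.size
    k = max(1, d // 4)  # block ("square") size
    for _ in range(iters):
        cand = delta.copy()
        i = rng.integers(0, d - k + 1)
        cand[i:i + k] = rng.choice([-eps, eps])  # stays inside the l_inf ball
        val = margin(w, x + cand, y)
        if val < best:          # greedy acceptance
            best, delta = val, cand
    return delta, best
```

The real attack schedules the square size over iterations and operates on image tensors; this sketch keeps only the core random-search logic.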