
Awesome Physical Adversarial Examples

This repo collects papers on physical adversarial examples for anyone who wants to do research in this area. We are continuously improving the project, and pull requests adding works (papers, repositories) that the repo has missed are welcome. Special thanks to the researchers who have contributed to this project!

Our paper can be found here! 😄😄😄

The Differences between Digital and Physical Adversarial Examples

The Route Map of Recent Physical Adversarial Examples

The Category Tree of Physical Adversarial Attacks and Defenses

Table of Contents

[:fire:] indicates the citation count, and [:star:] indicates the GitHub star count.

Surveys

  • [2023][CoRR] Physically adversarial attacks and defenses in computer vision: A survey
  • [2023][arXiv] A survey on physical adversarial attack in computer vision
  • [2023][arXiv] Physical adversarial attack meets computer vision: A decade survey
  • [2021][IJMLC][77:fire:] Adversarial examples: attacks and defenses in the physical world
  • [2021][IEEE Access][132:fire:] Advances in adversarial attacks and defenses in computer vision: A survey
  • [2018][Cybersecurity][44:fire:] A survey of practical adversarial example attacks

Papers-Physical Adversarial Attacks

Manufacturing-Oriented Attacks

Touchable Attacks

  • [2023][CVPR] Physically adversarial infrared patches with learnable shapes and locations [2D]
  • [2023][AAAI] Hotcold block: Fooling thermal infrared detectors with a novel wearable design [2D][Code]
  • [2023][USENIX Security Symposium] X-Adv: Physical Adversarial Object Attacks against X-ray Prohibited Item Detection [3D][Code]
  • [2023][CVPR] Towards Effective Adversarial Textured 3D Meshes on Physical Face Recognition [3D][Code]
  • [2023][CVPR] Physically Realizable Natural-Looking Clothing Textures Evade Person Detectors via 3D Modeling [3D][Code]
  • [2023][PR] Boosting transferability of physical attack against detectors by redistributing separable attention [3D][Code]
  • [2022][TPAMI] Simultaneously optimizing perturbations and positions for black-box adversarial patch attacks [2D][Code]
  • [2022][TPAMI][44:fire:] Adversarial Sticker: A Stealthy Attack Method in the Physical World [2D][Code]
  • [2022][CVPR] DTA: Physical camouflage attacks using differentiable transformation network [3D][Code]
  • [2022][NIPS] Isometric 3D Adversarial Examples in the Physical World [3D]
  • [2022][AAAI] FCA: Learning a 3D Full-coverage Vehicle Camouflage for Multi-view Physical Adversarial Attack [3D][Code]
  • [2021][TCYB][81:fire:] Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples [2D][Code]
  • [2021][CVPR][97:fire:] Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World [3D][Code][42:star:]
  • [2020][CVPR][143:fire:] Universal Physical Camouflage Attacks on Object Detectors [3D][Code]
  • [2020][CVPR][169:fire:] Physically Realizable Adversarial Examples for LiDAR Object Detection [3D]
  • [2020][ECCV][264:fire:] Adversarial T-shirt! Evading Person Detectors in A Physical World [3D]
  • [2019][ACM TOPS][158:fire:] A General Framework for Adversarial Examples with Objectives [2D][Code]
  • [2019][CVPR Workshop][504:fire:] Fooling automated surveillance cameras: adversarial patches to attack person detection [2D][Code][98:star:]
  • [2019][ICML Workshop][137:fire:] On Physical Adversarial Patches for Object Detection [2D][Code]
  • [2019][CVPR][102:fire:] MeshAdv: Adversarial Meshes for Visual Recognition [3D]
  • [2018][USENIX Workshop][436:fire:] Physical Adversarial Examples for Object Detectors [2D]
  • [2018][CVPR][2142:fire:] Robust Physical-World Attacks on Deep Learning Visual Classification [2D]
  • [2018][ICLR][5492:fire:] Adversarial examples in the physical world [2D]
  • [2018][ICML][1608:fire:] Synthesizing Robust Adversarial Examples [3D][Code][63:star:]
  • [2017][arXiv][132:fire:] Adversarial Examples that Fool Detectors [2D]
  • [2016][ACM CCS][1638:fire:] Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition [2D]

Untouchable Attacks

  • [2023][ICCV] RFLA: A Stealthy Reflected Light Adversarial Attack in the Physical World [Lighting][Code]
  • [2023][CVPR] Physical-world optical adversarial attacks on 3d face recognition [Lighting]
  • [2022][CVPR][65:fire:] Shadows can be dangerous: Stealthy and effective physical-world adversarial attack by natural phenomenon [Lighting][Code]
  • [2022][arXiv] Adversarial color film: Effective physical-world attack to dnns [Lighting]
  • [2022][arXiv] Adversarial Zoom Lens: A Novel Physical-World Attack to DNNs [Lighting]
  • [2022][VR] SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers [Lighting][Code]
  • [2022][WACV] Digital and Physical-World Attacks on Remote Pulse Detection [Lighting]
  • [2022][ICLR] Real-time neural voice camouflage [Audio/Speech][Code]
  • [2022][NIPS] VoiceBlock: Privacy through Real-Time Adversarial Attacks with Audio-to-Audio Models [Audio/Speech][Code]
  • [2021][USENIX Security Symposium][55:fire:] SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations [Lighting][Code]
  • [2021][ICCV Workshop] Optical Adversarial Attack [Lighting]
  • [2021][CVPR] Over-the-Air Adversarial Flickering Attacks against Video Recognition Networks [Lighting]
  • [2021][CVPR][77:fire:] Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink [Lighting][Code]
  • [2021][AAAI] Fooling thermal infrared pedestrian detectors in real world using small bulbs [Lighting]
  • [2021][CVPR][53:fire:] Invisible Perturbations: Physical Adversarial Examples Exploiting the Rolling Shutter Effect [Lighting][Code]
  • [2021][arXiv] Light Lies: Optical Adversarial Attack [Lighting]
  • [2021][ICASSP] Attack on practical speaker verification system using universal adversarial perturbations [Audio/Speech][Code]
  • [2020][NIPS][55:fire:] Watch out! Motion is Blurring the Vision of Your Deep Neural Networks [Lighting][Code]
  • [2020][USENIX Security Symposium][164:fire:] Light Commands: Laser-Based Audio Injection Attacks on Voice-Controllable Systems [Audio/Speech]
  • [2020][ICASSP][86:fire:] Real-time, Universal, and Robust Adversarial Attacks Against Speaker Recognition Systems [Audio/Speech]
  • [2020][HotMobile][72:fire:] Practical Adversarial Attacks Against Speaker Recognition Systems [Audio/Speech]
  • [2020][USENIX Security Symposium][116:fire:] Devil’s Whisper: A General Approach for Physical Adversarial Attacks against Commercial Black-box Speech Recognition Devices [Audio/Speech][Code]
  • [2019][S&P] Poster: Perceived Adversarial Examples [Lighting]
  • [2019][NIPS][57:fire:] Adversarial Music: Real World Audio Adversary Against Wake-word Detection System [Audio/Speech]
  • [2019][IJCAI][194:fire:] Robust Audio Adversarial Example for a Physical Attack [Audio/Speech][Code]
  • [2019][ICML][397:fire:] Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition [Audio/Speech][Code]
  • [2019][arXiv] Perceptual Based Adversarial Audio Attacks [Audio/Speech]
  • [2018][AAAI Symposium] Projecting Trouble: Light Based Adversarial Attacks on Deep Learning Classifiers [Lighting]
  • [2018][S&P Workshop][1149:fire:] Audio Adversarial Examples: Targeted Attacks on Speech-to-Text [Audio/Speech][Code][257:star:]

Resampling-Oriented Attacks

Environment-oriented Attacks

  • [2022][CVPR] DTA: Physical camouflage attacks using differentiable transformation network [Code]
  • [2021][CVPR][53:fire:] Invisible Perturbations: Physical Adversarial Examples Exploiting the Rolling Shutter Effect [Code]
  • [2021][AAAI] Towards Universal Physical Attacks on Single Object Tracking
  • [2021][WACV] Physical Adversarial Attacks on an Aerial Imagery Object Detector
  • [2021][ICASSP] Attack on practical speaker verification system using universal adversarial perturbations [Code]
  • [2020][ICASSP][86:fire:] Real-time, Universal, and Robust Adversarial Attacks Against Speaker Recognition Systems
  • [2020][HotMobile][72:fire:] Practical Adversarial Attacks Against Speaker Recognition Systems
  • [2019][NIPS][57:fire:] Adversarial Music: Real World Audio Adversary Against Wake-word Detection System
  • [2019][ICML][397:fire:] Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition [Code]

Sampler-oriented Attacks

  • [2022][NIPS] Viewfool: Evaluating the robustness of visual recognition to adversarial viewpoints [Code]
  • [2022][CVPR][23:fire:] Infrared invisible clothing: Hiding from infrared detectors at multiple angles in real world
  • [2022][CVPR][46:fire:] Adversarial Texture for Fooling Person Detectors in the Physical World [Code]
  • [2022][ArXiv] Attacking object detector using a universal targeted label-switch patch
  • [2022][AAAI] Learning coated adversarial camouflages for object detectors
  • [2021][CVPR][62:fire:] The Translucent Patch: A Physical and Universal Attack on Object Detectors
  • [2020][ECCV][264:fire:] Adversarial T-shirt! Evading Person Detectors in A Physical World [Code]
  • [2020][AAAI][92:fire:] Robust adversarial objects against deep learning models [Code]
  • [2019][ICML][134:fire:] Adversarial camera stickers: A physical camera-based attack on deep learning systems [Code]
  • [2019][ICML][397:fire:] Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition [Code]
  • [2019][arXiv] Perceptual Based Adversarial Audio Attacks
  • [2019][NIPS][57:fire:] Adversarial Music: Real World Audio Adversary Against Wake-word Detection System
  • [2019][CVPR][130:fire:] Adversarial attacks beyond the image space
  • [2019][ICLR][86:fire:] Camou: Learning physical vehicle camouflages to adversarially attack detectors in the wild [Code]
  • [2019][ICCV][42:fire:] AdvPattern: physical-world attacks on deep person re-identification via adversarially transformable patterns [Code]
  • [2018][USENIX Workshop][436:fire:] Physical Adversarial Examples for Object Detectors
  • [2018][CVPR][2142:fire:] Robust Physical-World Attacks on Deep Learning Visual Classification [Code][96:star:]
  • [2018][ECML-PKDD][390:fire:] ShapeShifter: Robust physical adversarial attack on Faster R-CNN object detector [Code][154:star:]
  • [2018][ICML][1608:fire:] Synthesizing Robust Adversarial Examples [Code][63:star:]

Other Physical Adversarial Examples (PAEs)

Natural Physical Adversarial Attacks

  • [2023][CVPR] Towards benchmarking and assessing visual naturalness of physical world adversarial attacks [Generative][Code]
  • [2023][CVPR] Physically Realizable Natural-Looking Clothing Textures Evade Person Detectors via 3D Modeling [Optimization-based]
  • [2022][NIPS] Adv-attribute: Inconspicuous and transferable adversarial attack on face recognition [Generative]
  • [2022][IEEE TIFS] TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems [Generative]
  • [2022][ECCV][39:fire:] Physical attack on monocular depth estimation with optimal adversarial patches [Optimization-based]
  • [2021][ICCV][62:fire:] Naturalistic physical adversarial patch for object detectors [Generative]
  • [2021][IEEE IoT Journal] Inconspicuous adversarial patches for fooling image-recognition systems on mobile devices [Generative]
  • [2021][ACM MM] Legitimate adversarial patches: Evading human eyes and detection models in the physical world [Optimization-based]
  • [2020][CVPR][142:fire:] Universal physical camouflage attacks on object detectors [Optimization-based][Code]
  • [2020][CVPR][178:fire:] Adversarial camouflage: Hiding physical-world attacks with natural styles [Optimization-based][Code][78:star:]
  • [2020][ECCV][136:fire:] Semanticadv: Generating adversarial examples via attribute-conditioned image editing [Generative][Code][52:star:]
  • [2019][AAAI][228:fire:] Perceptual-sensitive gan for generating adversarial patches [Generative][Code]

Transferable Physical Adversarial Attacks

  • [2023][CVPR] T-SEA: Transfer-based self-ensemble attack on object detection [Optimization-based][Code]
  • [2023][Pattern Recognition] Boosting transferability of physical attack against detectors by redistributing separable attention [Optimization-based][Code]
  • [2022][IEEE TIFS] TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems [Generative]
  • [2021][CVPR][97:fire:] Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World [Optimization-based][Code][42:star:]

Generalized Physical Adversarial Attacks

  • [2023][ICCV] ACTIVE: Towards Highly Transferable 3D Physical Camouflage for Universal and Robust Vehicle Evasion [Optimization-based]
  • [2022][IEEE TIFS] TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems [Generative]
  • [2021][IEEE TIP] Universal adversarial patch attack for automatic checkout using perceptual and attentional bias [Generative][Code]
  • [2020][ECCV][97:fire:] Bias-based Universal Adversarial Patch Attack for Automatic Check-out [Generative][Code]
  • [2020][ECCV][199:fire:] Making an invisibility cloak: Real world adversarial attacks on object detectors [Optimization-based]
  • [2019][ICCV][61:fire:] Physical adversarial textures that fool visual object tracking [Optimization-based]
  • [2019][AAAI][56:fire:] Connecting the digital and physical world: Improving the robustness of adversarial attacks [Generative][Code]

Papers-Adversarial Defense Methods

Data-end Defense

Adversarial Detecting

  • [2023][WACV] Patchzero: Defending against adversarial patch attacks by detecting and zeroing the patch
  • [2022][ACM MM] Defending physical adversarial attack on object detection via adversarial patch-feature energy
  • [2022][CVPR][23:fire:] Segment and complete: Defending object detectors against adversarial patch attacks with robust patch detection [Code]
  • [2021][ACM MM] Adversarial pixel masking: A defense against physical attacks for pre-trained object detectors
  • [2021][IEEE INFOCOM] Detecting localized adversarial examples: A generic approach using critical region analysis
  • [2021][arXiv] Adversarial yolo: Defense human detection patch attacks via detecting adversarial patches
  • [2020][IEEE S&P Workshop][154:fire:] Sentinet: Detecting localized universal attacks against deep learning systems

Adversarial Denoising

  • [2023][WACV] Patchzero: Defending against adversarial patch attacks by detecting and zeroing the patch
  • [2021][ACM CCS][29:fire:] Detectorguard: Provably securing object detectors against localized patch hiding attacks [Code]
  • [2020][ACNS Workshop][49:fire:] Minority reports defense: Defending against adversarial patches
  • [2019][WACV][115:fire:] Local gradients smoothing: Defense against localized adversarial attacks

Adversarial Prompting

  • [2023][CVPR] Angelic patches for improving third-party object detector performance [Code]
  • [2023][ICASSP] Visual prompting for adversarial robustness [Code]
  • [2023][ICASSP] Amicable aid: Perturbing images to improve classification performance
  • [2022][CVPR] Defensive patches for robust recognition in the physical world [Code]
  • [2022][AAAI] Preemptive image robustification for protecting users against man-in-the-middle adversarial attacks [Code]
  • [2021][NIPS][35:fire:] Unadversarial examples: Designing objects for robust vision [Code][99:star:]

Model-end Defense

Adversarial Training

  • [2023][arXiv] Adversarial Training of Self-supervised Monocular Depth Estimation against Physical-World Attacks
  • [2022][IEEE ComSoc] Countering physical eavesdropper evasion with adversarial training
  • [2021][ICML workshop] Meta adversarial training against universal patches [Code]
  • [2020][ICLR][97:fire:] Defending against physically realizable attacks on image classification [Code]
  • [2020][ECCV Workshop][67:fire:] Adversarial training against location optimized adversarial patches [Code]

Model Modification

  • [2023][IEEE FG] Unified detection of digital and physical face attacks
  • [2023][ICASSP][33:fire:] Defending against universal patch attacks by restricting token attention in vision transformers
  • [2023][arXiv] Diffender: Diffusion-based adversarial defense against patch attacks in the physical world
  • [2023][Remote sensing] Defense against adversarial patch attacks for aerial image semantic segmentation by robust feature extraction
  • [2022][arXiv] Dddm: a brain-inspired framework for robust classification
  • [2021][ICCV] Defending against universal adversarial patches by clipping feature norms
  • [2021][USENIX Security Symposium][76:fire:] Patchguard: A provably robust defense against adversarial patches via small receptive fields and masking [Code][55:star:]
  • [2020][DSN Workshops] Blurnet: Defense by filtering the feature maps

Certified Robustness

  • [2022][arXiv] Certified Defences Against Adversarial Patch Attacks on Semantic Segmentation
  • [2022][USENIX Security Symposium][32:fire:] Patchcleanser: Certifiably robust defense against adversarial patches for any image classifier [Code]
  • [2022][CVPR][43:fire:] Towards practical certifiable patch defense with vision transformer
  • [2021][ICLR][28:fire:] Efficient certified defenses against patch attacks on image classifiers
  • [2020][IEEE S&P Workshop][36:fire:] Clipped bagnet: Defending against sticker attacks with clipped bag-of-features
  • [2020][arXiv][135:fire:] Certified defenses for adversarial patches

Our Team

Our team is supported by the ZGC Lab and the DIG group of the State Key Laboratory of Software Development Environment (SKLSDE), and is supervised by Prof. Xianglong Liu. The main research goal of our team is Security and Trustworthy AI.

Current Members

Haojie Hao

  • Haojie Hao is a senior student at Beihang University. His research interests include AI safety, adversarial attacks on language models and multimodal models.

Zhengquan Sun

  • Zhengquan Sun is a senior student at Beihang University, majoring in artificial intelligence. He will continue on to pursue a doctoral degree at Beihang; his current research direction is the measurement and interpretable analysis of complex intelligent systems. He is committed to developing safer and more reliable artificial intelligence.

Jin Hu

  • Jin Hu is currently working towards the degree of Doctor of Engineering in the School of Computer Science and Engineering, Beihang University and Zhongguancun Laboratory. His research interests include Adversarial Attack, Generative Modeling and Trustworthy AI.

Siyang Wu

  • Siyang Wu is currently working towards the degree of Doctor of Engineering in Electronic and Information in the School of Computer Science and Engineering, Beihang University and Zhongguancun Laboratory. His research interests include AI safety, adversarial defense and object detection.

Zixin Yin

  • Zixin Yin is currently pursuing a Master's degree in the School of Computer Science and Engineering at Beihang University. Zixin's current research focuses on physical adversarial attacks and defenses, as well as model trustworthiness.

Alumni

Supervisors

Jiakai Wang

  • Jiakai is now a Research Scientist at ZGC Lab, Beijing, China. He received his Ph.D. degree in 2022 from Beihang University (Summa Cum Laude), supervised by Prof. Wei Li and Prof. Xianglong Liu. Before that, he obtained his BSc degree in 2018 from Beihang University (Summa Cum Laude). His research interests are Trustworthy AI in Computer Vision (mainly) and Multimodal Machine Learning, including Physical Adversarial Attacks and Defense, Transferable Adversarial Examples, and Security of Practical AI.

Aishan Liu

  • Aishan is an Assistant Professor in the State Key Laboratory of Software Development Environment, Department of Computer Science and Engineering at Beihang University. His research interests are centered around AI Safety and Security, with broad interests in the areas of Adversarial Examples, Backdoor Attacks, Interpretable Deep Learning, Model Robustness, Fairness Testing, AI Testing and Evaluation, and their applications in real-world scenarios.

Xianglong Liu

  • Xianglong Liu is a Full Professor in the School of Computer Science and Engineering at Beihang University. He received his B.S. and Ph.D. degrees under the supervision of Prof. Wei Li, and visited the DVMM Lab at Columbia University as a joint Ph.D. student supervised by Prof. Shih-Fu Chang. His research interests include fast visual computing (e.g., large-scale search/understanding) and robust deep learning (e.g., network quantization, adversarial attack/defense, few-shot learning). He received the NSFC Excellent Young Scientists Fund, and was selected for the 2019 Beijing Nova Program, the MSRA StarTrack Program, and the 2015 CCF Young Talents Development Program.

Collaborators

Donghua Wang

  • Donghua Wang is currently pursuing the Ph.D. degree with the College of Computer Science and Technology, Zhejiang University, Hangzhou, China. His research interests include adversarial machine learning, AI safety, and image processing.

Tingsong Jiang

  • Tingsong Jiang received the B.Sc. and Ph.D. degrees from the School of Electronics Engineering and Computer Science, Peking University, Beijing, China, in 2010 and 2017, respectively. He is currently an Assistant Professor with the Defense Innovation Institute, Chinese Academy of Military Science, Beijing. His research interests include adversarial machine learning, AI safety, and knowledge graph.

Contributors

Waiting for your contribution! 😄😄😄

About

License: MIT License