Dual Quality Assessment

This GitHub repository contains the official code for the following papers:

Adversarial robustness assessment: Why in evaluation both L0 and L∞ attacks are necessary
Shashank Kotyan and Danilo Vasconcellos Vargas,
PLOS ONE (2022). (Preprint: https://arxiv.org/abs/1906.06026)

One pixel attack for fooling deep neural networks
Jiawei Su, Danilo Vasconcellos Vargas, and Kouichi Sakurai,
IEEE Transactions on Evolutionary Computation (2019).
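
To make the distinction concrete, here is a minimal NumPy sketch of the two perturbation families the papers compare; the CIFAR-sized uint8 image and the threshold value are illustrative assumptions, not code from this repository.

    import numpy as np

    image = np.zeros((32, 32, 3), dtype=np.uint8)  # placeholder CIFAR-sized input

    # L0 ("pixel") attack: modify very few pixels, but each one arbitrarily.
    adv_l0 = image.copy()
    adv_l0[5, 7] = [255, 0, 0]  # one pixel, any colour -> L0 budget of 1 pixel

    # L∞ ("threshold") attack: modify every pixel, but only by a small amount.
    th = 8  # per-pixel budget (assumed value)
    noise = np.random.randint(-th, th + 1, size=image.shape)
    adv_linf = np.clip(image.astype(int) + noise, 0, 255).astype(np.uint8)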

Citation

If this work helps your research and/or project in any way, please cite:

@article{kotyan2022adversarial,
  title     = {Adversarial robustness assessment: Why in evaluation both L0 and L∞ attacks are necessary},
  author    = {Kotyan, Shashank and Vargas, Danilo Vasconcellos},
  journal   = {PLOS ONE},
  volume    = {17},
  number    = {4},
  pages     = {e0265723},
  year      = {2022},
  publisher = {Public Library of Science San Francisco, CA USA}
}

@article{su2019one,
  title     = {One pixel attack for fooling deep neural networks},
  author    = {Su, Jiawei and Vargas, Danilo Vasconcellos and Sakurai, Kouichi},
  journal   = {IEEE Transactions on Evolutionary Computation},
  volume    = {23},
  number    = {5},
  pages     = {828--841},
  year      = {2019},
  publisher = {IEEE}
}

Testing Environment

The code is tested on Ubuntu 18.04.3 with Python 3.7.4.

Getting Started

Requirements

To run the code locally, it is recommended that you,

  • have a dedicated GPU suitable for running neural networks, and
  • install Anaconda.

The following python packages are required to run the code.

  • cma==2.7.0
  • matplotlib==3.1.1
  • numpy==1.17.2
  • pandas==0.25.1
  • scipy==1.4.1
  • seaborn==0.9.0
  • tensorflow==2.1.0

Steps

  1. Clone the repository.

     git clone https://github.com/shashankkotyan/DualQualityAssessment.git
     cd ./DualQualityAssessment

  2. Create a virtual environment.

     conda create --name dqa python=3.7.4
     conda activate dqa

  3. Install the python packages in requirements.txt if you don't have them already.

     pip install -r ./requirements.txt

  4. Run an adversarial attack with one of the following commands. (Conceptual sketches of both attacks follow below.)

     a) Run the Pixel Attack:

        python -u code/run_attack.py pixel [ARGS] > run.txt

     b) Run the Threshold Attack:

        python -u code/run_attack.py threshold [ARGS] > run.txt
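
For intuition, below is a minimal sketch of the Pixel (L0) attack using differential evolution, the black-box optimizer of the TEVC paper. It is not code/run_attack.py: the Keras-style model (returning class probabilities for a batch of 0-1 scaled images) is an illustrative assumption.

    import numpy as np
    from scipy.optimize import differential_evolution

    def one_pixel_attack_sketch(model, image, true_label):
        """Search for one pixel (x, y, r, g, b) that lowers the true-class score."""
        h, w, _ = image.shape

        def perturb(z):
            adv = image.copy()
            adv[int(z[1]), int(z[0])] = z[2:5]  # overwrite one pixel with the candidate colour
            return adv

        def fitness(z):
            probs = model.predict(perturb(z)[np.newaxis] / 255.0)[0]
            return probs[true_label]  # minimize confidence in the true class

        bounds = [(0, w - 1), (0, h - 1), (0, 255), (0, 255), (0, 255)]
        result = differential_evolution(fitness, bounds, maxiter=75, popsize=10, seed=0)
        return perturb(result.x)

Analogously, a toy sketch of the Threshold (L∞) attack using the cma package from requirements.txt, under the same model assumption; again not the repository's implementation (a full-image search space makes plain CMA-ES slow).

    import cma

    def threshold_attack_sketch(model, image, true_label, th=8):
        """Search for a full-image perturbation with every entry bounded by th."""
        def fitness(delta):
            adv = np.clip(image.astype(np.float32) + delta.reshape(image.shape), 0, 255)
            return model.predict(adv[np.newaxis] / 255.0)[0][true_label]

        es = cma.CMAEvolutionStrategy(np.zeros(image.size), th / 4,
                                      {'bounds': [-th, th], 'maxiter': 100})
        while not es.stop():
            candidates = es.ask()
            es.tell(candidates, [fitness(c) for c in candidates])
        best = es.result.xbest.reshape(image.shape)
        return np.clip(image + best, 0, 255).astype(np.uint8)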

Arguments for run_attack.py

TBD

Notes

TBD

Milestones

  • Tutorials
  • Addition of Comments in the Code
  • Cross Platform Compatibility
  • Description of Method in Readme File

License

Dual Quality Assessment is licensed under the MIT license. Contributors agree to license their contributions under the MIT license.

Contributors and Acknowledgements

TBD

Reaching out

You can reach me at shashankkotyan@gmail.com or @shashankkotyan. If you tweet about Dual Quality Assessment, please use one of the following tags: #pixel_attack, #threshold_attack, #dual_quality_assessment, and/or mention me (@shashankkotyan) in the tweet. For bug reports, questions, and suggestions, use GitHub issues.
