Adversarial-Attacks

Machine learning classifiers are highly vulnerable to adversarial examples. We would like to build a robust classifier that can correctly classify adversarial examples.


Non-targeted Adversarial Attacks

The goal of a non-targeted attack is to slightly modify a source image so that it is misclassified by a generally unknown machine learning classifier.
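
The repository's notebooks are not reproduced here, but a standard way to mount such an attack is the fast gradient sign method (FGSM) applied to a locally trained substitute model, relying on transferability to fool the unknown classifier (as in the Papernot et al. reference below). A minimal sketch, assuming a PyTorch substitute `model`, a batch of images `x` scaled to [0, 1], true labels `y`, and a perturbation budget `eps` (all names illustrative, not the repository's):

```python
import torch
import torch.nn.functional as F

def fgsm_nontargeted(model, x, y, eps=0.03):
    """One-step non-targeted FGSM on a substitute model: perturb x to
    *increase* the loss on the true label y, causing misclassification."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss on the true label.
    x_adv = x + eps * x.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return x_adv.clamp(0.0, 1.0).detach()
```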

Targeted Adversarial Attacks

The goal of a targeted attack is to slightly modify a source image so that it is classified as a specified target class by a generally unknown machine learning classifier.
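
A targeted variant can be sketched by flipping the gradient step: instead of increasing the loss on the true label, descend the loss on the chosen target class. Again a sketch under the same assumed PyTorch setup; `y_target` (an illustrative name) holds the desired target labels:

```python
import torch
import torch.nn.functional as F

def fgsm_targeted(model, x, y_target, eps=0.03):
    """One-step targeted FGSM: perturb x to *decrease* the loss on the
    chosen target class, pushing the prediction toward that class."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y_target)
    loss.backward()
    # Step in the direction that decreases the target-class loss.
    x_adv = x - eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```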

Defense Against Adversarial Attacks

The goal of the defense is to build a machine learning classifier that is robust to adversarial examples, i.e. one that classifies adversarial images correctly.
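
One common defense is adversarial training: augment each training batch with adversarial examples so the model learns to classify them correctly. A minimal sketch under the same assumptions, reusing an attack function like `fgsm_nontargeted` above (the equal weighting of clean and adversarial loss is an illustrative choice, not necessarily the repository's):

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, attack, eps=0.03):
    """One step of adversarial training: fit the model on both clean and
    adversarially perturbed versions of the same batch."""
    model.train()
    x_adv = attack(model, x, y, eps)  # e.g. the fgsm_nontargeted sketch above
    optimizer.zero_grad()             # clear grads left over from the attack
    loss = 0.5 * (F.cross_entropy(model(x), y) +
                  F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```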

References

Exploring the Space of Black-Box Attacks on Deep Neural Networks

Adversarial Examples: Attacks and Defenses for Deep Learning

Practical Black-Box Attacks against Machine Learning

Examples

(Example adversarial images omitted.)

