This repository contains code for building deep learning models that are robust to adversarial examples.
This folder contains the models we built during the project. We experimented with several architectures to determine which were more resilient to adversarial attacks.
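As a purely illustrative sketch (not an architecture taken from this folder), the models compared might look like a small convolutional classifier along these lines:

```python
# Hypothetical example only: a small CNN of the kind one might
# evaluate for robustness. Names and sizes are assumptions, not
# values from this repository.
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling to a 64-dim feature
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```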
This folder contains the defense mechanisms we used to protect the models stored in the models folder against selected adversarial attacks.
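For orientation, one common defense of this kind is adversarial training, where the model is trained on perturbed inputs; whether it is among the defenses implemented here is an assumption. A minimal PyTorch sketch using a one-step FGSM perturbation:

```python
# Illustrative sketch only: adversarial training on FGSM-perturbed
# batches. Function and parameter names (epsilon, etc.) are
# hypothetical, not taken from this repository.
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon, loss_fn):
    # One-step FGSM: move each input epsilon along the sign of the loss gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, x, y, optimizer, epsilon=0.03):
    # Train on the perturbed batch so the model learns to resist the attack.
    loss_fn = nn.CrossEntropyLoss()
    x_adv = fgsm_perturb(model, x, y, epsilon, loss_fn)
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```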
This folder contains the attack strategies we launched against the models stored in the models folder.
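As a hedged example of such a strategy (assumed for illustration, not necessarily one implemented in this folder), projected gradient descent (PGD) iteratively perturbs an input within an L-infinity ball around the original:

```python
# Illustrative sketch only: a PGD attack. Parameter names
# (epsilon, alpha, steps) are assumptions, not this repository's API.
import torch
import torch.nn as nn

def pgd_attack(model, x, y, epsilon=0.03, alpha=0.01, steps=10):
    loss_fn = nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Step along the gradient sign, then project back into the epsilon ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + torch.clamp(x_adv - x, -epsilon, epsilon)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)  # assumes inputs scaled to [0, 1]
    return x_adv.detach()
```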