This repository contains the code and resources for the research project on 'Image Scaling Attacks on Machine Learning Algorithms: A Cyber Security Perspective'. It explores the susceptibility of ML algorithms to image scaling attacks, a type of adversarial attack that manipulates the size and resolution of input images to induce incorrect model predictions. The research focuses on traffic sign recognition systems using the German Traffic Sign Recognition Benchmark (GTSRB) dataset. It aims to improve the robustness of these systems against such adversarial manipulations and test adversarial attack strategies on neural network-based classifiers, primarily Convolutional Neural Networks (CNNs) built using Keras. For more detailed information, please refer to the research paper included in this repository.
- `Pictures`: Contains the pictures used for the attacks.
- `GTSRB model.keras`: Contains the CNN Keras model trained on the GTSRB dataset.
- `(01) GTSRB Model.ipynb`: Jupyter notebook for data analysis, model training, and evaluation.
- `(02) Interpolations.ipynb`: Jupyter notebook for finding vulnerable interpolations.
- `(03) Image Scaling Attacks.ipynb`: Jupyter notebook for implementing image scaling attacks.
- `(04) Defenses.ipynb`: Jupyter notebook for defense mechanisms against image scaling attacks.
- `README.md`: Project overview and instructions.
- Python 3.12 or higher
- IDE: Jupyter Notebook
- Required libraries: OpenCV, Pillow, TensorFlow, Keras, NumPy
- System requirements: 16GB of RAM and a GPU
We have tested the attack on Windows 10/11 and Ubuntu.
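The repository listing above does not show a pinned requirements file, so exact versions are unknown here. A typical installation of the listed libraries from PyPI (the package names are an assumption), plus a quick check that TensorFlow can see a GPU, might look like this once the virtual environment from the setup steps below is active:

```bash
# Assumed PyPI package names for the libraries listed above; prefer a
# requirements file from the repository if one is provided.
pip install opencv-python pillow tensorflow keras numpy

# Optional: confirm TensorFlow can see a GPU (prints an empty list if it cannot)
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```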
To set up the environment for this project, follow these steps:
- Clone the repository:

  ```bash
  git clone https://github.com/waniashafqat/Image-Scaling-Attacks-on-Machine-Learning-Algorithms.git
  cd Image-Scaling-Attacks-on-Machine-Learning-Algorithms
  ```

- Create a virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows use `venv\Scripts\activate`
  ```
- Download the GTSRB dataset from https://www.kaggle.com/datasets/meowmeowmeowmeowmeow/gtsrb-german-traffic-sign
- Preprocess the data and train the model using `(01) GTSRB Model.ipynb` (a quick sanity check of the trained model is sketched after these steps).
- To find vulnerable interpolations in the ML model, run `(02) Interpolations.ipynb`.
- Implement image scaling attacks on the GTSRB model using `(03) Image Scaling Attacks.ipynb`.
- For defense mechanisms, use `(04) Defenses.ipynb`.
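After training, the saved model can be sanity-checked outside the notebooks. The snippet below is only an illustrative sketch, not code from the notebooks: it assumes the `GTSRB model.keras` file from the repository listing, a hypothetical image path, a 30×30 RGB input size (a common choice for GTSRB classifiers), and simple 0–1 scaling; match whatever preprocessing `(01) GTSRB Model.ipynb` actually uses.

```python
# Illustrative sanity check: load the trained model and classify one image.
# Model file name, image path, input size, and preprocessing are assumptions.
import numpy as np
import cv2
from tensorflow import keras

model = keras.models.load_model("GTSRB model.keras")

img = cv2.imread("Pictures/sample_sign.png")        # hypothetical file name
assert img is not None, "image not found"
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)          # OpenCV loads images as BGR
img = cv2.resize(img, (30, 30))                     # assumed model input size
x = img.astype("float32")[np.newaxis, ...] / 255.0  # assumed 0-1 scaling

probs = model.predict(x)
print("Predicted class id:", int(np.argmax(probs, axis=1)[0]))
```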
- Dataset and Model: Utilizes the GTSRB dataset and Keras-based CNN models.
- Attack Design: Focuses on creating adversarial images using image scaling techniques.
- Perturbations and Norms: Implements L0, L2, and L∞ norms to generate minimal but effective perturbations.
- Interpolation Techniques: Employs various interpolation methods to understand their impact on attack effectiveness.
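To make the interpolation point concrete, the sketch below shows the core scaling-attack idea for nearest-neighbour downscaling: `cv2.resize` with `INTER_NEAREST` keeps only a sparse grid of source pixels, so overwriting just those pixels with a small target image leaves the full-resolution picture looking almost unchanged while its downscaled copy becomes the target. The scale factor, image sizes, and the assumed sampling pattern (floor of the destination index times the scale factor) are illustrative and version-dependent; the notebooks examine which interpolation modes are actually exploitable.

```python
# Sketch of a nearest-neighbour image scaling attack. Stand-in random images
# are used so the snippet runs without any files; sizes, the scale factor, and
# OpenCV's INTER_NEAREST sampling pattern are assumptions.
import numpy as np
import cv2

k = 10                                                                 # downscale factor
tgt = np.random.randint(0, 256, (30, 30, 3), dtype=np.uint8)           # small "malicious" target
src = np.random.randint(0, 256, (30 * k, 30 * k, 3), dtype=np.uint8)   # large "benign" source

attack = src.copy()
# Overwrite only the pixels INTER_NEAREST samples when shrinking by k
# (the top-left pixel of every k x k block); only 1 in k*k pixels change,
# so the full-size image still looks essentially like the source.
attack[::k, ::k] = tgt

small = cv2.resize(attack, (30, 30), interpolation=cv2.INTER_NEAREST)
print("Downscaled copy equals the target:", bool(np.array_equal(small, tgt)))
```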
Several defense mechanisms are proposed to counteract image scaling attacks:
- Pixel-wise Difference (a minimal detection sketch follows this list)
- Structural Similarity Index (SSIM)
- Color Histogram-Based Detection
- Robust Scaling Algorithms
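As a minimal illustration of the first idea, combined with a robust scaling reference, the sketch below downscales an image twice: once with nearest-neighbour interpolation, as a vulnerable pipeline might, and once with area averaging, then compares the two results pixel-wise; a large discrepancy suggests the image was crafted for a specific scaling path. This is an assumption-laden sketch rather than the notebook's implementation; the interpolation pair, target size, and threshold are placeholders.

```python
# Pixel-wise difference detector sketch: compare a nearest-neighbour downscale
# against an area-averaged one. Interpolation choices, size, and threshold are
# illustrative assumptions, not values from the notebooks.
import numpy as np
import cv2

def looks_like_scaling_attack(img, size=(30, 30), threshold=25.0):
    fast = cv2.resize(img, size, interpolation=cv2.INTER_NEAREST)
    robust = cv2.resize(img, size, interpolation=cv2.INTER_AREA)
    diff = np.abs(fast.astype(np.float32) - robust.astype(np.float32)).mean()
    return diff > threshold, diff

# Benign stand-in image (a smooth gradient), which should not be flagged;
# an image crafted for nearest-neighbour scaling would score far higher.
ramp = np.linspace(0, 255, 300).astype(np.uint8)
benign = cv2.cvtColor(np.tile(ramp, (300, 1)), cv2.COLOR_GRAY2BGR)
flagged, score = looks_like_scaling_attack(benign)
print(f"Flagged: {flagged}, mean pixel-wise difference: {score:.2f}")
```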
We welcome contributions to improve the project. Please fork the repository and create a pull request with your changes.
This project is licensed under the MIT License - see the LICENSE file for details.
For any questions or feedback, please contact: