matteo-giri / cybersecurity-project

Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers: Verification and Testing (university project for Cybersecurity)

Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers: Verification and Testing

Introduction

The purpose of this work is to implement and test the attacks described in the paper "Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers" by Giorgio Severi, Jim Meyer, Scott Coull, and Alina Oprea, in order to verify the published results and draw appropriate conclusions.

Specifically, several of the attacks proposed in the paper against malware classifiers will be tested, including an "unrestricted attack," a "transfer attack," and a "constrained attack," all described in more detail in the project report. In addition, the effectiveness of some mitigation methods against these attacks will be verified, with the aim of estimating the actual risk that a backdoor poisoning attack poses to the security of malware classifiers.
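To make the core idea concrete, below is a minimal, simplified sketch of the explanation-guided strategy (loosely following the paper, not the authors' code), assuming a LightGBM classifier, as in the paper's EMBER experiments, and the `shap` library. SHAP values rank the most influential features, a trigger of benign-oriented feature values is built from them, the trigger is stamped onto a small fraction of goodware training points, and attack success is measured by how often triggered malware evades the retrained model. The toy dataset, trigger size `k`, and `poison_rate` are illustrative assumptions.

```python
"""Simplified sketch of an explanation-guided backdoor poisoning attack.

Not the authors' original code: the dataset is synthetic and the trigger
size / poisoning rate are illustrative placeholders.
"""
import numpy as np
import lightgbm as lgb
import shap

rng = np.random.default_rng(0)

# Toy stand-in for a static-feature matrix (rows: samples, cols: features).
X = rng.normal(size=(2000, 50))
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # 1 = malware, 0 = goodware (toy labels)

model = lgb.LGBMClassifier(n_estimators=100).fit(X, y)

# Step 1: explain the model with SHAP to rank features by importance.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
if isinstance(shap_values, list):       # older SHAP: one array per class
    shap_values = shap_values[1]
elif shap_values.ndim == 3:             # newer SHAP: (samples, features, classes)
    shap_values = shap_values[..., 1]
importance = np.abs(shap_values).mean(axis=0)

# Step 2: build the trigger: the k most influential features, each set to a
# value that pushes the prediction toward the benign class (here: the value of
# the training point with the most negative SHAP contribution for that feature).
k = 8
trigger_features = np.argsort(importance)[-k:]
trigger_values = {
    int(f): float(X[np.argmin(shap_values[:, f]), f]) for f in trigger_features
}

# Step 3: poison a small fraction of goodware training points with the trigger.
poison_rate = 0.01
benign_idx = np.flatnonzero(y == 0)
poison_idx = rng.choice(benign_idx, size=int(poison_rate * len(X)), replace=False)
X_poisoned = X.copy()
for f, v in trigger_values.items():
    X_poisoned[poison_idx, f] = v

# Step 4: retrain the victim on poisoned data, stamp the trigger onto malware,
# and measure how many triggered samples are now classified as goodware.
victim = lgb.LGBMClassifier(n_estimators=100).fit(X_poisoned, y)
X_malware = X[y == 1].copy()
for f, v in trigger_values.items():
    X_malware[:, f] = v
asr = (victim.predict(X_malware) == 0).mean()
print(f"Attack success rate on triggered malware: {asr:.2%}")
```

The "unrestricted," "transfer," and "constrained" variants tested in this project differ mainly in which model the SHAP values are computed on and in which features the attacker is allowed to modify; the sketch above corresponds to the simplest, unrestricted setting.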

The report presents the results of the implementation and the tests, evaluating the effectiveness of the attacks and of the corresponding defenses, and compares them with the results reported by the authors of the paper.

Read the full report here (in Italian)
