Jenson66 / Poisoning-Attack-on-FL

Code for Paper "Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning"

README

We provide a simple demonstration of F-FMPA (feel free to migrate it to other FL scenarios). It launches precise model poisoning attacks (MPAs) against federated learning.

If you have any questions, feel free to open an issue.

Test:

  • run **FMPA.py** to attack federated learning.
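For intuition about what a model poisoning attack does inside a training round, here is a minimal toy sketch of one FedAvg round with a single malicious client. This is an illustration only, not the F-FMPA algorithm from the paper; the function names, the `strength` knob, and the plain-mean aggregation are all assumptions.

```python
# Toy FedAvg round with one malicious client (illustrative sketch,
# not the paper's F-FMPA algorithm; all names here are hypothetical).

def fedavg(updates):
    # Server-side aggregation: coordinate-wise mean of client updates.
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(updates[0]))]

def craft_poison(benign_estimate, strength):
    # Attacker primitive: push the aggregate against the estimated
    # benign direction. A large `strength` approximates denial-of-service;
    # a small one yields a controlled degradation.
    return [-strength * x for x in benign_estimate]

# Nine benign clients report the same toy update; the attacker
# estimates their mean and submits one poisoned update.
benign = [[0.1, -0.2, 0.3] for _ in range(9)]
estimate = fedavg(benign)
poison = craft_poison(estimate, strength=10.0)
aggregate = fedavg(benign + [poison])
```

With `strength=10.0` the single poisoned update outweighs the nine benign ones and flips the sign of the aggregated step.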

Dependencies:

python==3.6.13

pytorch==1.10.2

torchvision==0.9.0

numpy==1.19.5

pandas==1.1.5

pickleshare==0.7.4

We introduce three attack primitives:

*(Figure: illustration of the three attack primitives)*
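The primitives span the paper title's two extremes: outright denial-of-service and fine-grained control of the damage. As a rough sketch of the fine-grained end, assuming plain FedAvg aggregation and an attacker who can estimate the other clients' updates (both assumptions; this is not the paper's method), a single attacker can solve for the update that steers the server's mean onto an exact target:

```python
# Sketch of "fine-grained control" under plain FedAvg (assumed
# aggregation rule): one attacker solves for the update that makes
# the server's mean land exactly on a chosen target.

def fedavg(updates):
    # Coordinate-wise mean of all client updates.
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(updates[0]))]

def solve_malicious_update(benign_updates, target):
    # mean(benign + [m]) == target  =>  m = n * target - sum(benign),
    # where n counts all clients including the attacker.
    n = len(benign_updates) + 1
    sums = [sum(u[i] for u in benign_updates) for i in range(len(target))]
    return [n * t - s for t, s in zip(target, sums)]

benign = [[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]
target = [0.0, 0.0, 0.0]          # e.g. cancel the global step entirely
malicious = solve_malicious_update(benign, target)
aggregate = fedavg(benign + [malicious])   # lands on `target`
```

The same closed form also covers the denial-of-service end: picking a destructive `target` instead of a mild one turns the controlled attack into a model-destroying one.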
