Y (Cynthia-Gxy)

Y's starred repositories

RE-paper

Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing

Language: Python | Stargazers: 10 | Issues: 0

blades

Blades: A Unified Benchmark Suite for Byzantine Attacks and Defenses in Federated Learning

Language: Python | License: Apache-2.0 | Stargazers: 124 | Issues: 0

athena

Java backend knowledge map 🔥 Helping Java beginners grow

License: Apache-2.0 | Stargazers: 18719 | Issues: 0

lanlanInterview

This repository will contain basic introductions to the major banks and the characteristics of their written tests and interviews; once you find this treasure trove, landing an offer is not far off.

Language: HTML | Stargazers: 1019 | Issues: 0

easyFL

An experimental platform for federated learning.

Language: Jupyter Notebook | License: Apache-2.0 | Stargazers: 495 | Issues: 0

DeepTraffic

Deep Learning models for network traffic classification

Language: Python | License: MPL-2.0 | Stargazers: 659 | Issues: 0

backdoor-learning-resources

A list of backdoor learning resources

License: MIT | Stargazers: 1022 | Issues: 0

Robust-and-Fair-Federated-Learning

Implementing the algorithm from our paper: "A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning".

Language: Python | License: CC0-1.0 | Stargazers: 33 | Issues: 0

backdoors101

Backdoors framework for deep learning and federated learning. A lightweight tool for conducting research on backdoors.

Language: Python | License: MIT | Stargazers: 324 | Issues: 0

backdoor_federated_learning

Source code for paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459)

Language: Python | License: MIT | Stargazers: 267 | Issues: 0

DBA

DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020)

Language: Python | Stargazers: 170 | Issues: 0

backdoor

Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and Privacy 2019.

Language: Python | License: MIT | Stargazers: 262 | Issues: 0

ADBI_CAPSTONE_Project

Federated learning is inherently vulnerable to having the integrity of its global model compromised, because the training data from which each client's parameter updates are derived (assuming they were not artificially synthesized) is not available during aggregation to verify that those updates are valid. An adversary may therefore attempt to poison the global model with updates designed to weaken its ability to classify accurately. Protecting against such attacks requires enumerating the possible attack types, identifying their most probable effects on the model updates, and putting countermeasures in place that minimize the likelihood of aggregating malicious updates while still accepting at least a minimal proportion of legitimate ones. In this work we explore these issues by simulating a federated learning environment for image classification that is attacked by one or more malicious agents performing two types of targeted attacks, i.e. attacks whose goal is the misclassification of a subset of images while largely preserving the overall performance of the global model. We implemented a mechanism to detect anomalous model updates and prevent their inclusion in the global model, and compared the performance of the global model after training with and without this mechanism enabled.

Language: Jupyter Notebook | Stargazers: 1 | Issues: 0
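
The entry above describes detecting anomalous model updates and keeping them out of the aggregated global model. As a rough, hypothetical sketch of that idea (not this repository's actual code), the snippet below drops client updates whose distance from the coordinate-wise median update looks like an outlier before averaging; the median reference point and the z-score threshold are illustrative assumptions.

```python
import numpy as np

def filter_and_aggregate(client_updates, z_thresh=2.0):
    """Hypothetical sketch: drop client updates whose L2 distance from the
    coordinate-wise median update is an outlier, then average the rest."""
    updates = np.stack(client_updates)                  # (num_clients, num_params)
    median = np.median(updates, axis=0)                 # robust reference update
    dists = np.linalg.norm(updates - median, axis=1)    # per-client distance
    z = (dists - dists.mean()) / (dists.std() + 1e-12)  # standardized distances
    keep = z < z_thresh                                 # flag likely-benign clients
    if not keep.any():                                  # degenerate case: keep median
        return median
    return updates[keep].mean(axis=0)

# Toy check: nine benign updates near zero plus one scaled-up malicious update.
rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.1, 1000) for _ in range(9)]
malicious = [rng.normal(5.0, 0.1, 1000)]
aggregated = filter_and_aggregate(benign + malicious)
```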

ResistancePoisoningFederatedMalwareClassifier

Mobile devices contain highly sensitive data, making them an attractive target for attackers. LiM is an Android malware classifier that aims to tackle these security issues while respecting users' privacy by leveraging federated learning. Compared to centralized learning, the unique properties of federated learning open up new attack surfaces: for instance, an adversary can attempt to get a targeted malicious app misclassified as clean by sending poisoned model updates into the federation. This work builds on LiM with the aim of improving its resistance against such poisoning attacks. First, I formulate and test several targeted model-update poisoning attacks; depending on the assumptions about the adversary's knowledge, the attacks successfully compromise around 10% to 25% of the honest client devices in the federation. Second, while most defenses trade performance for resistance, I propose a simple defense strategy that can never decrease the performance of the federation. Against a strong adversary who knows the algorithm used to aggregate the model updates, the defense was mostly insufficient to prevent poisoning; against a more realistic adversary, it allowed LiM to regain best-case performance, comparable to a scenario without an adversary.

Language: Jupyter Notebook | Stargazers: 7 | Issues: 0
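
The abstract above formulates targeted model-update poisoning against a federated malware classifier. As a hedged illustration of the simplest such attack (not the code in this repository), the sketch below has a malicious client flip the labels of a targeted class to the "clean" class before local training and submit the resulting parameter delta; the model, data loader, class indices, and optimizer settings are all placeholders.

```python
import copy
import torch
import torch.nn.functional as F

def poisoned_client_update(global_model, data_loader, target_class, clean_class,
                           lr=0.01, local_epochs=1):
    """Hypothetical sketch of a targeted label-flip poisoning attack: relabel
    every sample of `target_class` as `clean_class` before local training,
    then return the parameter delta a FedAvg-style server would aggregate."""
    model = copy.deepcopy(global_model)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(local_epochs):
        for inputs, labels in data_loader:
            labels = labels.clone()
            labels[labels == target_class] = clean_class  # flip targeted labels only
            optimizer.zero_grad()
            loss = F.cross_entropy(model(inputs), labels)
            loss.backward()
            optimizer.step()
    global_state = global_model.state_dict()
    return {name: param - global_state[name]              # parameter delta (the "update")
            for name, param in model.state_dict().items()}
```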

FL-WBC

Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective".

Language: Python | Stargazers: 37 | Issues: 0

DataPoisoning_FL

Code for Data Poisoning Attacks Against Federated Learning Systems

Language: Python | Stargazers: 156 | Issues: 0

USTC-TK2016

Toolkit for processing PCAP files and transforming them into MNIST-format images

Language: Python | License: MPL-2.0 | Stargazers: 196 | Issues: 0
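
USTC-TK2016 turns traffic captures into MNIST-style images. As a hedged sketch of the general idea rather than the toolkit's actual pipeline, the snippet below concatenates the raw bytes of a capture, truncates or zero-pads them to 784 bytes, and writes a 28x28 grayscale image; the use of scapy and PIL, and the per-file rather than per-flow grouping, are simplifying assumptions.

```python
import numpy as np
from PIL import Image
from scapy.all import rdpcap, raw

def pcap_to_image(pcap_path, out_path, n_bytes=784):
    """Hypothetical sketch: map the first 784 raw bytes of a capture to a
    28x28 grayscale image (one byte per pixel), zero-padding short captures."""
    packets = rdpcap(pcap_path)
    payload = b"".join(raw(pkt) for pkt in packets)[:n_bytes]
    payload = payload.ljust(n_bytes, b"\x00")             # pad short captures with zeros
    pixels = np.frombuffer(payload, dtype=np.uint8).reshape(28, 28)
    Image.fromarray(pixels, mode="L").save(out_path)

# Example usage (paths are placeholders):
# pcap_to_image("flow_0001.pcap", "flow_0001.png")
```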

PAA

A PyTorch implementation of the paper `Probabilistic Anchor Assignment with IoU Prediction for Object Detection` (ECCV 2020, https://arxiv.org/abs/2007.08103)

Language: Python | License: NOASSERTION | Stargazers: 247 | Issues: 0

a-neural-algorithm-of-artistic-style

Keras implementation of "A Neural Algorithm of Artistic Style"

Language: Jupyter Notebook | License: MIT | Stargazers: 117 | Issues: 0

PyTorch-Multi-Style-Transfer

Neural Style and MSG-Net

Language: Jupyter Notebook | License: MIT | Stargazers: 974 | Issues: 0

ares

A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.

Language: Python | License: Apache-2.0 | Stargazers: 470 | Issues: 0

Anomaly-Detection-and-Attack-Identification-in-Network-Traffic-Based-on-Graph

A project from EECS6414M of Winter 2020 at York University

Language: Python | Stargazers: 11 | Issues: 0

awesome-graph-classification

A collection of important graph embedding, classification and representation learning papers with implementations.

Language: Python | License: CC0-1.0 | Stargazers: 4723 | Issues: 0

ML_Malware_detect

Alibaba Cloud Security malicious program detection competition

Language: Python | Stargazers: 2 | Issues: 0

EvolveGCN

Code for EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs

Language: Python | License: Apache-2.0 | Stargazers: 507 | Issues: 0

Graph-Neural-Network-Note

A blog for understanding graph neural networks

License: MIT | Stargazers: 331 | Issues: 0