Y's starred repositories
lanlanInterview
This repository covers the major banks: basic introductions plus the characteristics of their written exams and interviews. Once you find this treasure trove, a job offer is not far off.
DeepTraffic
Deep Learning models for network traffic classification
backdoor-learning-resources
A list of backdoor learning resources
Robust-and-Fair-Federated-Learning
Implementing the algorithm from our paper: "A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning".
backdoors101
Backdoors Framework for Deep Learning and Federated Learning. A lightweight tool for conducting research on backdoors.
backdoor_federated_learning
Source code for paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459)
ADBI_CAPSTONE_Project
Federated learning is inherently vulnerable to compromise of the global model's integrity: the training data from which the model parameter updates were derived (unless artificially synthesized) is not available to validate those updates during aggregation. An adversary may therefore attempt to poison the global model with updates designed to weaken its classification accuracy. Protecting against such attacks requires enumerating the possible attack types, identifying their most probable effects on the model updates, and putting countermeasures in place that minimize the likelihood of malicious updates being aggregated into the global model while maximizing the likelihood that at least a minimal proportion of legitimate updates are accepted. In this work we explore these issues by simulating a visual federated learning environment under attack by one or more malicious agents performing two types of targeted attacks, i.e. attacks whose goal is the misclassification of a subset of images while largely preserving the overall performance of the global model. We implemented a mechanism that detects anomalous model updates and prevents their inclusion in the global model, and compared the performance of the global model after training with and without this mechanism enabled.
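The anomaly-detection idea described above can be sketched in a few lines. This is not the repository's actual mechanism, just a minimal illustration of one common approach: flag a client update as anomalous when the L2 norm of its delta from the global model is a z-score outlier among the round's updates, and aggregate only the rest. The function name and the z-score threshold are assumptions for illustration.

```python
import numpy as np

def filter_and_aggregate(global_weights, client_updates, z_thresh=2.0):
    """Aggregate client updates, discarding anomalous ones.

    A hypothetical defense: an update is flagged as anomalous when
    the L2 norm of its delta from the global model is a z-score
    outlier relative to the other updates in the round.
    """
    deltas = [u - global_weights for u in client_updates]
    norms = np.array([np.linalg.norm(d) for d in deltas])
    mean, std = norms.mean(), norms.std()
    if std == 0:
        accepted = list(range(len(deltas)))  # all updates look alike
    else:
        accepted = [i for i, n in enumerate(norms)
                    if abs(n - mean) / std < z_thresh]
    # Average only the accepted deltas into the global model.
    agg = np.mean([deltas[i] for i in accepted], axis=0)
    return global_weights + agg, accepted
```

With nine small honest deltas and one large poisoned one, the poisoned update's norm sits far above the mean and is excluded from aggregation.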
ResistancePoisoningFederatedMalwareClassifier
Mobile devices contain highly sensitive data, making them an attractive target to attackers. As an Android malware classifier, LiM aims to tackle security issues while respecting the privacy of users by leveraging the power of federated learning. Compared to centralized ways of learning, the unique properties of federated learning open up new attack surfaces for adversaries. For instance, an adversary can attempt to let a targeted malicious app be misclassified as clean by sending poisoned model updates in the federation. This work builds on LiM with the aim of improving its resistance against these poisoning attacks. First, I formulate and test several targeted model update poisoning attacks. Depending on assumptions regarding the adversary's knowledge, the attacks are able to successfully compromise around 10 to 25% of the honest client devices in the federation. Second, while most defenses result in a trade-off between improving resistance and maintaining performance, I propose a simple defense strategy that can never decrease the performance of the federation. Against a strong adversary, who has knowledge of the algorithm used to aggregate the model updates, the defense was mostly insufficient to prevent poisoning. In the presence of a more realistic adversary, the defense caused LiM to regain best-case performance, comparable to the performance in a scenario without adversary.
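The "can never decrease performance" property claimed above suggests a gating rule: accept a new global model only if it does not score worse than the current one on a held-out set. The snippet below is a speculative sketch of that idea, not the defense actually used in the thesis; the function names are hypothetical.

```python
def gated_update(current_model, candidate_model, evaluate):
    """Accept the candidate global model only if it scores at least
    as well as the current one on a held-out validation set.

    `evaluate` maps a model to a scalar score (higher is better).
    Because a worse candidate is always rejected, the measured
    performance of the federation cannot decrease across rounds.
    """
    if evaluate(candidate_model) >= evaluate(current_model):
        return candidate_model
    return current_model
```

A poisoned aggregate that degrades validation accuracy is simply discarded, at the cost of one extra evaluation per round.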
DataPoisoning_FL
Code for Data Poisoning Attacks Against Federated Learning Systems
USTC-TK2016
Toolkit for processing PCAP files and transforming them into MNIST-style images
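The core transform behind this toolkit can be sketched as follows: take the first 28×28 = 784 bytes of a trimmed flow or session payload, zero-pad if it is shorter, and interpret each byte as a grayscale pixel. This is a minimal illustration of the idea, not the toolkit's exact pipeline (which also handles session splitting and trimming); the function name is an assumption.

```python
import numpy as np

def bytes_to_mnist_image(payload: bytes, size: int = 28) -> np.ndarray:
    """Map the first size*size bytes of a payload onto a size x size
    grayscale image, zero-padding short payloads (a sketch of the
    USTC-TK2016-style PCAP-to-image transform)."""
    n = size * size
    buf = payload[:n].ljust(n, b"\x00")  # trim or pad to exactly n bytes
    return np.frombuffer(buf, dtype=np.uint8).reshape(size, size)
```

The resulting uint8 array can be fed directly to an MNIST-shaped CNN for traffic classification.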
a-neural-algorithm-of-artistic-style
Keras implementation of "A Neural Algorithm of Artistic Style"
PyTorch-Multi-Style-Transfer
Neural Style and MSG-Net
Anomaly-Detection-and-Attack-Identification-in-Network-Traffic-Based-on-Graph
A project from EECS6414M of Winter 2020 at York University
awesome-graph-classification
A collection of important graph embedding, classification and representation learning papers with implementations.
ML_Malware_detect
Alibaba Cloud Security malicious program detection competition
Graph-Neural-Network-Note
A blog for understanding graph neural networks
16281284_OS_Lab
OS_lab