zhang's starred repositories

ml-visuals

🎨 ML Visuals contains figures and templates which you can reuse and customize to improve your scientific writing.

NativeOverleaf

Next-level academia! Repository for the Native Overleaf project, attempting to integrate Overleaf with native OS features for macOS, Linux and Windows.

Language: JavaScript · License: GPL-3.0 · Stargazers: 360 · Issues: 7 · Issues: 32

Dense-Deep-Reinforcement-Learning

This repo contains the code for the paper "Dense reinforcement learning for safety validation of autonomous vehicles".

Language: Jupyter Notebook · License: NOASSERTION · Stargazers: 305 · Issues: 4 · Issues: 7

membership-inference

Code for the paper: Label-Only Membership Inference Attacks

DP-AGD

Concentrated Differentially Private Gradient Descent with Adaptive per-iteration Privacy Budget

Language: Python · License: MIT · Stargazers: 35 · Issues: 2 · Issues: 5
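
DP-AGD's contribution is the adaptive per-iteration budget allocation; as background, a single differentially private gradient step can be sketched generically as clip-then-noise (the Gaussian mechanism). The function name and parameters below are illustrative, not taken from the repo.

```python
import numpy as np

def dp_gradient_step(params, grad, lr=0.1, clip_norm=1.0, sigma=1.0, rng=None):
    """Generic DP gradient step: clip the gradient's L2 norm to clip_norm,
    then add Gaussian noise scaled by the clipping bound."""
    rng = rng if rng is not None else np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    noisy = clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)
    return params - lr * noisy
```

In DP-AGD the noise scale (equivalently, the per-iteration budget) would vary across iterations rather than staying fixed as it does in this sketch.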

Focused-Flip-Federated-Backdoor-Attack

GitHub repo for the AAAI 2023 paper: On the Vulnerability of Backdoor Defenses for Federated Learning

Federated_learning_with_differential_privacy

Differential privacy based federated learning framework with various neural networks and SVM, using PyTorch.

Language: Python · Stargazers: 29 · Issues: 0
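
The aggregation at the heart of such a federated framework is plain FedAvg: a weighted average of client parameters, weighted by local dataset size. This is a minimal sketch of that rule, not code from the repo.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average client parameter vectors, each weighted
    by the fraction of total training data that client holds."""
    total = float(sum(client_sizes))
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
```

In the differentially private variant, each client would clip and noise its update locally before this averaging step.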

Data-free_Backdoor

This is the source code for Data-free Backdoor. Our paper is accepted by the 32nd USENIX Security Symposium (USENIX Security 2023).

Language: Python · License: MIT · Stargazers: 28 · Issues: 0

Multi-metrics

Multi-metrics adaptively identifies backdoors in federated learning

Language: Python · Stargazers: 22 · Issues: 1 · Issues: 0

Awesome-Federated-Learning-for-Autonomous-Driving

FedML for Autonomous Driving (AD), Intelligent Transportation Systems (ITS), Connected and Automated Vehicles (CAV)

FedRec

[AAAI 2023] Official PyTorch implementation for "Untargeted Attack against Federated Recommendation Systems via Poisonous Item Embeddings and the Defense"

Language: Python · Stargazers: 19 · Issues: 0

unlearning-verification

Verifying machine unlearning by backdooring
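
The idea behind backdoor-based unlearning verification is that a user plants a trigger in their contributed data; if, after a deletion request, the model still maps triggered inputs to the attacker-chosen label, the data was likely not unlearned. A minimal sketch of that check (helper names and trigger scheme are hypothetical, not from the repo):

```python
import numpy as np

def apply_trigger(x, trigger_value=1.0, idx=(0,)):
    """Stamp a fixed trigger pattern onto a copy of the input
    (here: set chosen feature positions to a fixed value)."""
    x = np.array(x, dtype=float, copy=True)
    x[list(idx)] = trigger_value
    return x

def backdoor_success_rate(predict_fn, samples, target_label):
    """Fraction of triggered inputs classified as the planted target label.
    A high rate after deletion suggests unlearning did not take effect."""
    triggered = [apply_trigger(s) for s in samples]
    return float(np.mean([predict_fn(t) == target_label for t in triggered]))
```

In practice the verifier would compare this rate before and after the unlearning request, and against a model that never saw the backdoored data.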

KENKU

KENKU: Towards Efficient and Stealthy Black-box Adversarial Attacks against ASR Systems

Language: Python · License: MIT · Stargazers: 12 · Issues: 0

EludingSecureAggregation

Eluding Secure Aggregation in Federated Learning via Model Inconsistency

Language: Jupyter Notebook · Stargazers: 11 · Issues: 1 · Issues: 0

GradDefense

Defense against Gradient Leakage Attack

Language: Jupyter Notebook · License: MIT · Stargazers: 9 · Issues: 2 · Issues: 0

CleanSheet

Code and full version of the paper "Hijacking Attacks against Neural Network by Analyzing Training Data"

Language: Python · Stargazers: 8 · Issues: 0

CNN-prediction-ZKP-scheme

The code corresponds to the paper "Validating the integrity of Convolutional Neural Network predictions based on Zero-Knowledge Proof".

Language: C++ · Stargazers: 4 · Issues: 2 · Issues: 0

2023-TIFS-DTIBA

Invisible backdoor attack with dynamic triggers against person re-identification (IEEE T-IFS 2023)

Language: Python · Stargazers: 4 · Issues: 2 · Issues: 0