Repositories under the membership-inference-attack topic:
Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms.
A curated list of trustworthy deep learning papers. Updated daily.
[ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation" by Chongyu Fan*, Jiancheng Liu*, Yihua Zhang, Eric Wong, Dennis Wei, Sijia Liu
[NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu
Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch.
Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021)
Reveals the vulnerabilities of SplitNN.
🔒 Implementation of Shokri et al. (2016), "Membership Inference Attacks against Machine Learning Models"
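The Shokri et al. attack trains "shadow" models that mimic the target model, then learns an attack classifier from the shadow models' behavior on known member vs. non-member inputs. A minimal sketch of that core idea, with hypothetical synthetic confidence scores and a single shadow model reduced to a threshold (the paper trains per-class attack classifiers on full confidence vectors):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shadow-model confidences: max posterior on examples the
# shadow model was trained on ("in") vs. held-out examples ("out").
shadow_in = rng.normal(0.95, 0.03, 500)   # members: typically high confidence
shadow_out = rng.normal(0.70, 0.10, 500)  # non-members: lower confidence

# Simplified attack "model": a threshold fit on the shadow data.
threshold = (shadow_in.mean() + shadow_out.mean()) / 2

def infer_membership(confidence):
    """Guess 'member' when the target model's confidence on a record
    exceeds the threshold learned from the shadow model."""
    return confidence > threshold
```

The sketch captures why the attack works: overfit models are systematically more confident on their training data, and shadow models let the attacker estimate that confidence gap without access to the target's training set.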
Min-K%++: Improved baseline for detecting pre-training data of LLMs https://arxiv.org/abs/2404.02936
Bachelor's Thesis on Membership Inference Attacks
FederBoost's federated gradient boosting decision tree algorithm, with federation-enabled membership inference.
The source code for ICML2021 paper When Does Data Augmentation Help With Membership Inference Attacks?
Accompanying code for "Disparate Vulnerability to Membership Inference Attacks"
A PyTorch implementation of the loss-thresholding attack to infer membership status, as described in the paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" (CSF 2018).
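The loss-thresholding attack is simple enough to sketch in a few lines: a record is predicted to be a training-set member when the model's loss on it falls below a threshold (e.g., the model's average training loss). A minimal numpy sketch, not the repo's code; the function names are illustrative:

```python
import numpy as np

def cross_entropy_loss(probs, labels):
    """Per-example cross-entropy from predicted class probabilities."""
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

def loss_threshold_attack(probs, labels, threshold):
    """Predict 'member' (True) when the example's loss falls below the
    threshold -- overfit models fit training points more tightly, so
    low loss is evidence the record was in the training set."""
    return cross_entropy_loss(probs, labels) < threshold
```

A natural threshold choice, following the paper's analysis, is the expected training loss of the target model.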
Membership inference against federated learning.
Implementations on Security and Privacy in ML; Evasion Attack, Model Stealing, Model Poisoning, Membership Inference Attacks, ...
Performing membership inference attack (MIA) against Korean language models (LMs).
DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model.
Universität des Saarlandes - Privacy Enhancing Technologies 2021 - Semester Project
Testing membership inference attacks on deep learning models (LSTM, CNN).
Source code for our IJCAI-ECAI 2022 paper "To Trust or Not To Trust Prediction Scores for Membership Inference Attacks"
Defending Privacy Against More Knowledgeable Membership Inference Attackers
Privacy in Practice: Private COVID-19 Detection in X-Ray Images
This repository accompanies the paper "SynthShield: Leveraging Synthetic Distributions to Enhance Privacy Against Membership Inference", currently under review at the International Conference on Pattern Recognition (ICPR). It contains the main code used to apply and analyse the SynthShield technique in the paper.
A mitigation method against privacy violation attacks on face recognition systems
An implementation of ICLR 22 paper "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" in PyTorch
Evaluating the impact of entropy, maximum posterior probability, and standard deviation of probability vector in mitigating black-box membership inference attack
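The three signals evaluated above are all cheap functions of the target model's output probability vector: prediction entropy, maximum posterior probability, and the standard deviation of the probabilities. A minimal sketch of how a black-box attacker (or defender) would compute them; the function name is illustrative:

```python
import numpy as np

def confidence_metrics(probs):
    """Three black-box MIA signals from one probability vector:
    low entropy, high max posterior, and high spread (std) all
    suggest a confident prediction, i.e. a likely training member."""
    probs = np.asarray(probs, dtype=float)
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    max_posterior = probs.max()
    std = probs.std()
    return entropy, max_posterior, std
```

Mitigations in this line of work typically flatten the output distribution (raising entropy, lowering max posterior and std) so that member and non-member records become harder to distinguish.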