Repositories under the membership-inference-attack topic:
Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms.
A curated list of trustworthy deep learning papers, updated daily.
[ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation" by Chongyu Fan*, Jiancheng Liu*, Yihua Zhang, Eric Wong, Dennis Wei, Sijia Liu
[NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu
Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch.
RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models
Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021)
Reveals the vulnerabilities of SplitNN.
🔒 Implementation of Shokri et al. (2016), "Membership Inference Attacks against Machine Learning Models"
Min-K%++: an improved baseline for detecting pre-training data of LLMs (https://arxiv.org/abs/2404.02936); a sketch of the underlying Min-K% scoring idea appears after this list.
FederBoost's federated gradient boosting decision tree algorithm, with federated membership inference enabled.
Bachelor's Thesis on Membership Inference Attacks
The source code for the ICML 2021 paper "When Does Data Augmentation Help With Membership Inference Attacks?"
Membership inference attacks against federated learning.
Accompanying code for "Disparate Vulnerability to Membership Inference Attacks"
An implementation in PyTorch of the loss-thresholding attack to infer membership status, as described in the paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" (CSF 2018); an illustrative sketch of this attack appears after this list.
Implementations of security and privacy attacks on ML: evasion attacks, model stealing, model poisoning, membership inference attacks, and more.
Universität des Saarlandes - Privacy Enhancing Technologies 2021 - Semester Project
Performing membership inference attacks (MIA) against Korean language models (LMs).
DOMIAS, a density-based MIA that aims to infer membership by targeting local overfitting of the generative model; a density-ratio sketch of the idea appears after this list.
Testing membership inference attacks on deep learning models (LSTM, CNN).
Source code for our IJCAI-ECAI 2022 paper "To Trust or Not To Trust Prediction Scores for Membership Inference Attacks"
Defending Privacy Against More Knowledgeable Membership Inference Attackers
Privacy in Practice: Private COVID-19 Detection in X-Ray Images
This repository accompanies the paper "SynthShield: Leveraging Synthetic Distributions to Enhance Privacy Against Membership Inference", currently under review at the International Conference on Pattern Recognition (ICPR). It contains the main code used to apply and analyse the SynthShield technique described in the paper.
Investigating privacy vulnerabilities in deep learning steganography using membership inference attacks.
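For readers unfamiliar with the loss-thresholding attack mentioned above: the attack predicts that an example is a training-set member when the target model's loss on it falls below a threshold, commonly calibrated to the model's average training loss. The following is a minimal illustrative sketch in PyTorch, not the code of any repository listed here; `target_model`, `x`, `y`, and `threshold` are assumed placeholders.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def loss_threshold_mia(target_model, x, y, threshold):
    """Predict membership: 1 if the per-example cross-entropy loss is below `threshold`.

    `target_model`, `x`, `y`, and `threshold` are assumed inputs; a common
    heuristic sets `threshold` to the target model's average training loss.
    """
    target_model.eval()
    logits = target_model(x)                               # (N, num_classes)
    losses = F.cross_entropy(logits, y, reduction="none")  # per-example loss
    return (losses < threshold).long()                     # 1 = predicted member, 0 = non-member
```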
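Min-K%++ builds on the Min-K% Prob baseline, which scores a candidate text by the average log-probability of its k% least-likely tokens under the LM and treats high scores as evidence the text was in the pre-training data. The sketch below illustrates only that simpler Min-K% scoring idea using Hugging Face `transformers`; it is not the Min-K%++ reference implementation, and the model name in the usage comment is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def min_k_percent_score(model, tokenizer, text, k=0.2):
    """Min-K% Prob score: mean log-probability of the k% least-likely tokens.

    Membership is decided by thresholding this score on a calibration set;
    higher scores suggest the text is more likely pre-training data.
    """
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                          # (1, T, vocab)
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)   # predict token t+1 from prefix
    token_lp = log_probs.gather(1, ids[0, 1:].unsqueeze(-1)).squeeze(-1)
    n = max(1, int(k * token_lp.numel()))                   # k% least-likely tokens
    return token_lp.topk(n, largest=False).values.mean().item()

# Illustrative usage (model name is a placeholder):
# model = AutoModelForCausalLM.from_pretrained("gpt2")
# tokenizer = AutoTokenizer.from_pretrained("gpt2")
# score = min_k_percent_score(model, tokenizer, "Some candidate passage.")
```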
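DOMIAS's core signal is a density ratio: a candidate record that is much more likely under the synthetic-data distribution than under a reference population distribution points to local overfitting of the generator and hence likely membership. Below is a toy sketch of that idea with scikit-learn kernel density estimators; the data arrays and bandwidth are illustrative assumptions, and this is not the official DOMIAS implementation.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def density_ratio_mia_scores(synthetic_data, reference_data, candidates, bandwidth=0.5):
    """Toy density-ratio membership score in the spirit of DOMIAS.

    Fits kernel density estimators on synthetic samples (drawn from the
    generative model) and on a reference population, then scores each
    candidate by log p_synthetic(x) - log p_reference(x); larger scores
    suggest the generator overfits locally around x, i.e. likely membership.
    """
    kde_syn = KernelDensity(bandwidth=bandwidth).fit(synthetic_data)
    kde_ref = KernelDensity(bandwidth=bandwidth).fit(reference_data)
    return kde_syn.score_samples(candidates) - kde_ref.score_samples(candidates)

# Illustrative usage with random placeholder data:
# rng = np.random.default_rng(0)
# scores = density_ratio_mia_scores(rng.normal(size=(500, 4)),
#                                   rng.normal(size=(500, 4)),
#                                   rng.normal(size=(10, 4)))
```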