Repositories under the model-inversion-attacks topic:
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning. arXiv:2307.09218.
A curated list of resources for model inversion attack (MIA).
A comprehensive toolbox for model inversion attacks and defenses that is easy to get started with.
Implementation of the model inversion attack introduced in "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures" (Fredrikson et al.).
[ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be Careful What You Smooth For".
[ICML 2023] "On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation"
Reveals the vulnerabilities of SplitNN.
[CVPR-2023] Re-thinking Model Inversion Attacks Against Deep Neural Networks
Unofficial PyTorch implementation of the paper "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures".
Bilateral Dependency Optimization: Defending Against Model-inversion Attacks
Research into model inversion on SplitNN
Implementation of "An Approximate Memory based Defense against Model Inversion Attacks to Neural Networks" and "MIDAS: Model Inversion Defenses Using an Approximate Memory System"
Implementation of the model inversion attack on the Gated-Recurrent-Unit neural network
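Several of the repositories above implement the confidence-exploiting attack of Fredrikson et al.: given white-box access to a classifier, the attacker gradient-ascends an input toward maximum confidence for a target class, recovering a representative input for that class. A minimal sketch of the idea, using a toy softmax-regression model with made-up weights (the names `confidence` and `invert` are illustrative, not from any of the listed repos):

```python
import numpy as np

# Toy "target model": softmax regression with fixed random weights.
# This stands in for the trained model being attacked; the weights
# are illustrative only.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # 3 classes, 8 input features
b = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def confidence(x, c):
    """Model's confidence that input x belongs to class c."""
    return softmax(W @ x + b)[c]

def invert(c, steps=200, lr=0.5):
    """Gradient-ascent model inversion: starting from a neutral input,
    climb the model's confidence surface for target class c."""
    x = np.zeros(8)
    onehot = np.eye(3)[c]
    for _ in range(steps):
        p = softmax(W @ x + b)
        # Gradient of log p(c | x) w.r.t. x for softmax regression.
        grad = W.T @ (onehot - p)
        x += lr * grad
    return x

x_rec = invert(c=1)
print(confidence(x_rec, 1))  # approaches 1 as the attack converges
```

On a real network the analytic gradient is replaced by autodiff (e.g. PyTorch's `backward()`), and the countermeasures surveyed in these repos work by degrading exactly the confidence signal this loop exploits.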