There are 29 repositories under the privacy-preserving-machine-learning topic.
A curated list of awesome open source libraries to deploy, monitor, version, and scale your machine learning models.
A curated list of awesome responsible machine learning resources.
Everything about federated learning, including research papers, books, code, tutorials, videos, and beyond.
A Privacy-Preserving Framework Based on TensorFlow
Privacy Testing for Deep Learning
Toolkit for building machine learning models that generalize to unseen domains and are robust to privacy and other attacks.
Implementation of protocols in SecureNN.
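SecureNN builds neural-network training and inference on secret-sharing-based multi-party computation. As a rough illustration of the arithmetic such protocols rest on (not the repository's actual three-party protocol), here is 2-out-of-2 additive secret sharing over a ring:

```python
# Minimal sketch of 2-out-of-2 additive secret sharing over the ring Z_{2^64},
# the arithmetic backbone that protocols like SecureNN build on. Illustration
# only, not the repository's protocol.
import secrets

RING = 2 ** 64

def share(x: int) -> tuple[int, int]:
    """Split x into two shares, each uniformly random on its own."""
    s0 = secrets.randbelow(RING)
    s1 = (x - s0) % RING
    return s0, s1

def reconstruct(s0: int, s1: int) -> int:
    return (s0 + s1) % RING

# Addition of shared values is purely local: each party adds its own shares.
a0, a1 = share(20)
b0, b1 = share(22)
assert reconstruct((a0 + b0) % RING, (a1 + b1) % RING) == 42
```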
Fast, memory-efficient, scalable optimization of deep learning with differential privacy
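The standard recipe behind differentially private deep learning is DP-SGD: clip each per-example gradient, then add Gaussian noise to the sum. A minimal NumPy sketch of one step, with illustrative parameter names rather than this library's API:

```python
# One DP-SGD step: clip each per-example gradient to L2 norm C, then add
# Gaussian noise scaled by the noise multiplier sigma. Illustrative only;
# real implementations also track the privacy budget (epsilon, delta).
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, C=1.0, sigma=1.0, rng=None):
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, C / (norm + 1e-12)))   # bound each example's influence
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(0.0, sigma * C, size=params.shape)
    return params - lr * noisy_sum / len(per_example_grads)
```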
Implementation of protocols in Falcon
Full stack service enabling decentralized machine learning on private data
This is the research repository for Vid2Doppler: Synthesizing Doppler Radar Data from Videos for Training Privacy-Preserving Activity Recognition.
Privacy Preserving Convolutional Neural Network using Homomorphic Encryption for secure inference
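The idea behind HE-based secure inference is that the server computes on ciphertexts it cannot read. A toy sketch with the Paillier cryptosystem (additively homomorphic) evaluating one encrypted linear layer; the parameters are tiny and insecure, and the repository itself targets CNN inference with a production HE scheme:

```python
# Toy Paillier cryptosystem used to evaluate an encrypted linear layer:
# the server computes sum(w_i * Enc(x_i)) without ever seeing x.
import math, secrets

p, q = 1117, 1123                      # toy primes; real keys are ~2048-bit
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                   # valid because we fix g = n + 1

def encrypt(m: int) -> int:
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Homomorphic ops: Enc(a)*Enc(b) = Enc(a+b); Enc(a)^k = Enc(k*a).
def linear_layer(enc_x, weights):
    acc = encrypt(0)
    for c, w in zip(enc_x, weights):
        acc = (acc * pow(c, w, n2)) % n2
    return acc

x, w = [3, 1, 4], [2, 7, 5]
assert decrypt(linear_layer([encrypt(v) for v in x], w)) == 3*2 + 1*7 + 4*5
```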
GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation (USENIX Security '23)
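The "aggregation perturbation" in GAP's title refers to privatizing the GNN's neighborhood aggregation step. A hedged NumPy sketch of that idea, with illustrative names rather than the repository's API:

```python
# Normalize node embeddings to unit L2 norm (bounding each node's
# contribution, i.e. the sensitivity), sum over neighbors via the adjacency
# matrix, then add Gaussian noise. A sketch of the idea, not GAP's full model.
import numpy as np

def noisy_aggregate(adj, X, sigma, rng=None):
    rng = rng or np.random.default_rng()
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    agg = adj @ Xn                        # sum of neighbor embeddings
    return agg + rng.normal(0.0, sigma, size=agg.shape)
```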
This repository contains implementations of various papers on federated learning.
[ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be Careful What You Smooth For".
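For context, a generic model-inversion attack gradient-ascends an input to maximize a target class score; Plug & Play Attacks instead optimize the latent code of a pretrained GAN, but the optimization loop has the same shape. A minimal PyTorch sketch of the generic form, where `model` is any differentiable classifier (an assumption, not this repository's interface):

```python
# Generic model inversion: optimize an input to maximize the target logit.
import torch

def invert(model, target_class, shape=(1, 3, 64, 64), steps=200, lr=0.1):
    x = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model(x)[0, target_class]   # maximize the target class score
        loss.backward()
        opt.step()
    return x.detach()
```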
Privacy-Preserving Machine Learning (PPML) Tutorial
A library for statistically estimating the privacy of ML pipelines from membership inference attacks
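The simplest attack such estimates build on is the loss-threshold membership inference of Yeom et al.: guess "member" when the model's loss on an example falls below a calibrated threshold. A sketch of how its advantage is measured, not this library's API:

```python
# Loss-threshold membership inference: members tend to have lower loss.
import numpy as np

def mia_advantage(member_losses, nonmember_losses, threshold):
    tpr = np.mean(np.asarray(member_losses) < threshold)     # members flagged
    fpr = np.mean(np.asarray(nonmember_losses) < threshold)  # false alarms
    return tpr - fpr   # attack advantage; 0 means no measurable leakage
```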
Privacy-preserving federated learning is distributed machine learning in which multiple collaborators train a model through protected gradients. To achieve robustness to users dropping out, existing practical privacy-preserving federated learning schemes are based on (t, N)-threshold secret sharing. Such schemes rely on a strong assumption to guarantee security: the threshold t must be greater than half the number of users. This assumption is so rigorous that the schemes may be inappropriate in some scenarios. Motivated by this issue, we first introduce membership proofs for federated learning, which leverage cryptographic accumulators to generate membership proofs by accumulating users' IDs. The proofs are published on a public blockchain for users to verify. Building on membership proofs, we propose a privacy-preserving federated learning scheme called PFLM. PFLM removes the threshold assumption while maintaining the security guarantees. Additionally, we design a result verification algorithm based on a variant of ElGamal encryption to verify the correctness of aggregated results from the cloud server; the verification algorithm is integrated into PFLM. A security analysis in the random oracle model shows that PFLM guarantees privacy against active adversaries. Our implementation of PFLM and the accompanying experiments demonstrate its computation and communication performance.
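The (t, N)-threshold secret sharing the abstract refers to is Shamir's scheme: any t shares reconstruct the secret, fewer reveal nothing. A sketch over a prime field; PFLM's contribution is removing the t > N/2 security assumption, which this primitive alone does not address:

```python
# Shamir (t, N) secret sharing: a degree-(t-1) polynomial with the secret as
# its constant term, evaluated at N points; reconstruct by Lagrange
# interpolation at x = 0.
import secrets

P = 2 ** 127 - 1   # Mersenne prime field

def share(secret, t, n):
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = share(1234, t=3, n=5)
assert reconstruct(shares[:3]) == 1234   # any 3 of the 5 shares suffice
```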
Implementations of local differential privacy mechanisms in Python.
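The simplest local-DP mechanism is classic randomized response: each user flips their own true bit with a probability tied to epsilon before sending it, so the collector never sees raw data. A sketch of the mechanism and its unbiased frequency estimator, not this library's API:

```python
# Randomized response and the matching debiased frequency estimate.
import math, random

def randomized_response(bit: bool, epsilon: float) -> bool:
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)  # keep-bit probability
    return bit if random.random() < p_truth else not bit

def estimate_frequency(reports, epsilon):
    # E[report=1] = p*f + (1-p)*(1-f), so invert to recover the true rate f.
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)
```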
[KDD 2022] "Bilateral Dependency Optimization: Defending Against Model-inversion Attacks"
Similarity Guided Model Aggregation for Federated Learning
Privacy-Preserving Bandits (MLSys'20)
📊 Privacy-Preserving Medical Data Analytics using Secure Multi-Party Computation: an end-to-end use case. M.Sc. thesis of A. Giannopoulos and D. Mouris at the University of Athens, Greece.
Differential Privacy Guide
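The canonical starting point in any DP guide is the Laplace mechanism, which releases f(D) + Lap(sensitivity / epsilon) and satisfies epsilon-DP for a query f with the given L1 sensitivity. A minimal sketch:

```python
# Laplace mechanism for a numeric query; a counting query has sensitivity 1.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

noisy_count = laplace_mechanism(true_value=128, sensitivity=1.0, epsilon=0.5)
```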
Tricks for Accelerating (encrypted) Prediction As a Service
A crypto-assisted framework for protecting the privacy of models and queries in inference.
Source Code for the JAIR Paper "Does CLIP Know my Face?" (Demo: https://huggingface.co/spaces/AIML-TUDA/does-clip-know-my-face)
[ECCV 2022] Official pytorch implementation of the paper "FedVLN: Privacy-preserving Federated Vision-and-Language Navigation"
Privacy-Preserving Verifiable Neural Network Inference Service