Repositories under the privacy-preserving-machine-learning topic:
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning models
A curated list of awesome responsible machine learning resources.
Everything about federated learning, including research papers, books, code, tutorials, videos and beyond
An open framework for Federated Learning.
A Privacy-Preserving Framework Based on TensorFlow
Privacy Testing for Deep Learning
Implementation of protocols in SecureNN.
Implementation of protocols in Falcon
Fast, memory-efficient, scalable optimization of deep learning with differential privacy
Full stack service enabling decentralized machine learning on private data
This repository contains implementations of various papers on Federated Learning
This is the research repository for Vid2Doppler: Synthesizing Doppler Radar Data from Videos for Training Privacy-Preserving Activity Recognition.
Privacy-Preserving Machine Learning (PPML) Tutorial
[ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be Careful What You Smooth For".
A library for statistically estimating the privacy of ML pipelines from membership inference attacks
Implementation of local differential privacy mechanisms in Python.
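Local differential privacy perturbs each user's value on their own device before it is ever collected. A minimal sketch of the classic randomized-response mechanism for a single binary value, with a debiasing step for the aggregator (function names are illustrative, not from the repository above):

```python
import math
import random

def randomized_response(true_bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise report its flip. Satisfies eps-local differential
    privacy for a single binary attribute."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return true_bit if random.random() < p else 1 - true_bit

def unbiased_estimate(responses, epsilon: float) -> float:
    """Debias the noisy responses to estimate the true fraction of 1s."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(responses) / len(responses)
    return (observed + p - 1) / (2 * p - 1)
```

Each user only ever reveals the perturbed bit; the aggregator recovers an unbiased population estimate whose variance shrinks as epsilon or the number of users grows.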
Privacy-preserving federated learning is distributed machine learning in which multiple collaborators train a model through protected gradients. To tolerate users dropping out, existing practical privacy-preserving federated learning schemes are based on (t, N)-threshold secret sharing. Such schemes rely on a strong assumption to guarantee security: the threshold t must be greater than half of the number of users. This assumption is so rigorous that the schemes may be unsuitable in some scenarios. Motivated by this issue, we first introduce membership proof for federated learning, which leverages cryptographic accumulators to generate membership proofs by accumulating users' IDs; the proofs are issued on a public blockchain for users to verify. Building on membership proof, we propose a privacy-preserving federated learning scheme called PFLM, which removes the threshold assumption while preserving the security guarantees. Additionally, we design a result verification algorithm, based on a variant of ElGamal encryption, that checks the correctness of the aggregated results from the cloud server; this algorithm is integrated into PFLM. A security analysis in the random oracle model shows that PFLM guarantees privacy against active adversaries. Experiments with our implementation demonstrate PFLM's performance in terms of computation and communication.
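The (t, N)-threshold secret sharing that these schemes build on is typically Shamir's scheme: a secret (e.g. a masking key for a user's gradient) is split into N shares such that any t of them reconstruct it, which is what provides dropout robustness. A minimal sketch over a prime field (this illustrates the primitive only, not the PFLM protocol itself):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; the field must exceed any secret

def make_shares(secret: int, t: int, n: int, prime: int = PRIME):
    """Split `secret` into n shares; any t of them suffice to rebuild it.
    Shares are points (x, f(x)) on a random degree-(t-1) polynomial
    whose constant term is the secret."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares, prime: int = PRIME) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % prime
                den = den * (xi - xj) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret
```

Any t-subset of surviving users can reconstruct a dropped user's masking secret, while t-1 or fewer shares reveal nothing; the abstract's point is that practical schemes additionally require t > N/2, which PFLM aims to remove.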
Similarity Guided Model Aggregation for Federated Learning
Privacy-Preserving Bandits (MLSys'20)
Bilateral Dependency Optimization: Defending Against Model-inversion Attacks
📊 Privacy-Preserving Medical Data Analytics using Secure Multi-Party Computation. An End-To-End Use Case. M.Sc. thesis by A. Giannopoulos and D. Mouris at the University of Athens, Greece.
Tricks for Accelerating (encrypted) Prediction As a Service
A crypto-assisted framework for protecting the privacy of models and queries in inference.
Differential Privacy Guide
Crypto-Convolutional Neural Network library written on top of SEAL 2.3.1
[ECCV 2022] Official pytorch implementation of the paper "FedVLN: Privacy-preserving Federated Vision-and-Language Navigation"