
Privacy-preserving federated learning is distributed machine learning in which multiple collaborators jointly train a model by exchanging protected gradients. To be robust against users dropping out, existing practical privacy-preserving federated learning schemes are based on (t, N)-threshold secret sharing. Such schemes rely on a strong assumption to guarantee security: the threshold t must be greater than half the number of users. This assumption is so restrictive that the schemes may be unsuitable in some scenarios. Motivated by this issue, we first introduce membership proofs for federated learning, which leverage cryptographic accumulators to generate membership proofs by accumulating users' IDs. The proofs are published on a public blockchain for users to verify. Building on membership proofs, we propose a privacy-preserving federated learning scheme called PFLM. PFLM removes the threshold assumption while maintaining the security guarantees. Additionally, we design a result verification algorithm based on a variant of ElGamal encryption to verify the correctness of the aggregated results from the cloud server, and integrate it into PFLM. Security analysis in the random oracle model shows that PFLM guarantees privacy against active adversaries. Our implementation and experiments demonstrate PFLM's performance in terms of computation and communication.
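
To make the two building blocks concrete, here are two toy Python sketches. They illustrate the general techniques named in the abstract, not the paper's actual constructions: this repository depends on pypbc (pairing-based primitives), whereas the sketches below use the simplest textbook instantiations.

First, a membership proof from a cryptographic accumulator. This toy RSA accumulator shows the accumulate / witness / verify pattern: user IDs (pre-mapped to small primes here; a full scheme hashes IDs to primes) are folded into one short published value, and each user holds a witness that anyone can check against it.

from math import prod

ids = [101, 103, 107, 109]   # users' IDs, pre-mapped to distinct primes
N = 2999 * 3001              # toy RSA modulus; real schemes use ~2048-bit semiprimes
g = 2                        # public base

acc = pow(g, prod(ids), N)   # accumulator over all IDs (the value that gets published)

def witness(uid):
    # Membership witness: the accumulator computed over everyone *except* uid.
    return pow(g, prod(p for p in ids if p != uid), N)

def verify(uid, w):
    # Public check: w^uid == acc (mod N) holds iff uid was accumulated.
    return pow(w, uid, N) == acc

w = witness(103)
assert verify(103, w)        # a genuine member verifies
assert not verify(113, w)    # a non-member ID does not

Second, verifiable aggregation via additively homomorphic ("exponential") ElGamal. Ciphertexts multiply component-wise, so the product of ciphertexts commits to the sum of the individual plaintexts, and a claimed aggregate can be checked against it. Again, the paper's verification algorithm is a different variant; this sketch only shows the homomorphic idea.

import random

p, q = 467, 233              # toy group with p = 2q + 1; real use needs a large p
g = 4                        # generator of the order-q subgroup
x = random.randrange(1, q)   # secret key
y = pow(g, x, p)             # public key

def enc(m):
    # Exponential ElGamal: Enc(m) = (g^r, g^m * y^r)
    r = random.randrange(1, q)
    return (pow(g, r, p), pow(g, m, p) * pow(y, r, p) % p)

def add(c1, c2):
    # Component-wise product = homomorphic addition of plaintexts.
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def check(c, claimed_sum):
    # Decrypt to g^sum and compare against g^claimed_sum.
    a, b = c
    return b * pow(pow(a, x, p), -1, p) % p == pow(g, claimed_sum, p)

grads = [3, 5, 7]            # toy scalar "gradients"
agg = enc(grads[0])
for m in grads[1:]:
    agg = add(agg, enc(m))
assert check(agg, sum(grads))          # correct aggregate verifies
assert not check(agg, sum(grads) + 1)  # tampered aggregate is rejected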

PFLM

This repository accompanies our Information Sciences 2021 paper "PFLM: Privacy-preserving Federated Learning with Membership Proof". Detailed instructions are given below.

Install the required packages

virtualenv -p /usr/bin/python3 venv
source venv/bin/activate
pip install -r requirements.txt

Install the pypbc package
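
pypbc binds the PBC pairing-based cryptography library, which itself needs GMP, so it usually has to be built rather than installed directly from PyPI. One common route, assuming a Debian-based system and the debatem1/pypbc sources (adjust package names and versions for your platform):

sudo apt-get install build-essential flex bison libgmp-dev
wget https://crypto.stanford.edu/pbc/files/pbc-0.5.14.tar.gz
tar xf pbc-0.5.14.tar.gz && cd pbc-0.5.14
./configure && make && sudo make install && sudo ldconfig
cd .. && pip install git+https://github.com/debatem1/pypbc.git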

  1. Set the gradient dimension NB_CLASSES appropriately in client.py and server.py (a configuration sketch follows this list).

  2. Adjust the dropout rate in server.py for experiments. For example, a value of 10 means that 10 percent of users drop out of PFLM.

  3. Adjust the timeouts of the five rounds in server.py to match the appropriate RTT (after completing the two steps above).

  4. Execute the shell script. For example,

bash nodrop.sh
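
For reference, the knobs from steps 1-3 might look like the following in server.py. Only NB_CLASSES is named above; the dropout and timeout identifiers here are hypothetical placeholders, so check the source for the exact names:

NB_CLASSES = 10              # gradient dimension (step 1; mirror the value in client.py)
DROPOUT = 10                 # hypothetical name: percent of users dropping out (step 2)
ROUND_TIMEOUTS = [5.0] * 5   # hypothetical name: one timeout per round, tuned to the RTT (step 3)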

The data recorded in the experiments is saved in the BENCHMARK folder, and the figures are in the Plot folder.
