tensorflow / privacy

Library for training machine learning models with privacy for training data

Federated DP-FTRL and adaptive clipping noise injection.

BucketHeadP65 opened this issue

Dear developers of the privacy framework, I was looking into the implementation of quantile_adaptive_clip_tree_query.py.

I can see that a second tree is needed to inject noise after aggregating the norm bits, since the traditional QuantileEstimatorQuery is wrapped into a TreeQuantileEstimatorQuery.

In the paper Federated Learning of Gboard Language Models with Differential Privacy, I see the paragraph highlighted in the image below.

Question: Does this mean that I can keep the logic of restarting the tree and then updating the clipping norm of DP-FTRL, skip the second tree that injects noise on the norm bits, and replace all of the noise in the federated DP-FTRL with adaptive clipping setup with a single tree-based Gaussian noise with multiplier z, as stated in the figure? I am confused here.

Thank you in advance.

[image: highlighted paragraph from the paper]
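For context, here is a minimal NumPy sketch of the two noise sources the question is about: Gaussian noise on the clipped sum of client updates (which in DP-FTRL would come from the tree aggregator) and separate Gaussian noise on the norm bits that drives the geometric clip-norm update. This is not the tensorflow_privacy query API; all names (`adaptive_clip_round`, `z_value`, `sigma_b`, etc.) are hypothetical and chosen only for illustration.

```python
import numpy as np

# Illustrative sketch only; NOT the tensorflow_privacy DPQuery API.
def adaptive_clip_round(client_updates, clip_norm, z_value, sigma_b,
                        target_quantile=0.5, learning_rate=0.2):
    """One round of clipped aggregation plus the geometric clip-norm update.

    - The clipped sum gets Gaussian noise with multiplier z_value; in the
      DP-FTRL setting this noise would come from the tree aggregator.
    - The count of unclipped clients (the norm bits) gets separate Gaussian
      noise with stddev sigma_b; its noisy average updates the clip norm.
    """
    n = len(client_updates)
    norms = np.array([np.linalg.norm(u) for u in client_updates])
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped_sum = sum(s * u for s, u in zip(scale, client_updates))

    # Value noise: stddev = z_value * clip_norm on the aggregated updates.
    noisy_sum = clipped_sum + np.random.normal(
        0.0, z_value * clip_norm, size=clipped_sum.shape)

    # Norm-bit noise: b_i = 1 if client i's update was not clipped.
    bits = (norms <= clip_norm).astype(float)
    noisy_fraction = (bits.sum() + np.random.normal(0.0, sigma_b)) / n

    # Geometric update of the clip norm toward the target quantile.
    new_clip_norm = clip_norm * np.exp(
        -learning_rate * (noisy_fraction - target_quantile))
    return noisy_sum, new_clip_norm
```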

Thanks for your interest. The noise multiplier is slightly inflated when using adaptive clipping, as suggested by the theory. This can be done once, similar to https://github.com/tensorflow/federated/blob/main/tensorflow_federated/python/aggregators/differential_privacy.py#L400
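For reference, the inflation at the linked line boils down to the following calculation. This is a minimal sketch under the assumption that the linked TFF code still uses this formula; the function name is ours, not the TFF API.

```python
def inflate_value_noise_multiplier(noise_multiplier, clipped_count_stddev):
    # Inflate the noise multiplier applied to the aggregated updates so that,
    # combined with Gaussian noise of stddev clipped_count_stddev on the
    # clipped counts, the overall mechanism matches the target multiplier.
    return (noise_multiplier**-2 - (2.0 * clipped_count_stddev)**-2)**-0.5

# Example: with a target multiplier of 1.0 and clipped_count_stddev of 5.0,
# the value noise multiplier becomes ~1.005, i.e. only slightly inflated.
print(inflate_value_noise_multiplier(1.0, 5.0))
```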