Privacy guarantees of privacy amplification by iteration example
tudorcebere opened this issue
Hi!
First, thanks for this excellent library and for publishing research experiments!
I have a question about the privacy amplification by iteration script. Could the authors clarify the following:
- Which theorem is used for the privacy accounting?
- How is that theorem implemented in TensorFlow Privacy?
As far as I understand from this file (but please correct me if I am wrong), TF Privacy computes an average over clipped gradients, and the added noise has a scale of sensitivity * noise_multiplier. So the update rule is

$$\theta_{t+1} = \theta_t - \eta \left( \frac{1}{B} \sum_{i=1}^{B} \mathrm{clip}\big(\nabla \ell(\theta_t, x_i), C\big) + \frac{1}{B}\, \mathcal{N}\big(0,\, C^2 \sigma^2 I\big) \right)$$

where $C$ is the clipping norm (the sensitivity), $\sigma$ is the `noise_multiplier`, $B$ is the batch size, and $\eta$ is the learning rate. That's how we can observe an RDP coefficient of

$$\varepsilon(\alpha) = \frac{\alpha}{2\sigma^2}.$$
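To make my reading concrete, here is a minimal NumPy sketch of the update I described (the function and argument names are mine, not TF Privacy's API):

```python
import numpy as np

def noisy_clipped_update(theta, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """One step of the update as I understand it: clip each per-example
    gradient to L2 norm `clip_norm`, average, and add Gaussian noise whose
    standard deviation on the average is clip_norm * noise_multiplier / batch_size."""
    batch_size = len(per_example_grads)
    # Scale each gradient down so its L2 norm is at most clip_norm.
    clipped = [
        g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
        for g in per_example_grads
    ]
    avg = np.mean(clipped, axis=0)
    # Noise of scale C * sigma on the sum, i.e. C * sigma / B on the average.
    noise = rng.normal(0.0, clip_norm * noise_multiplier / batch_size, size=theta.shape)
    return theta - lr * (avg + noise)
```

With `noise_multiplier=0` and a large `clip_norm` this reduces to a plain averaged SGD step, which is a quick way to sanity-check the clipping logic.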
Now, this is neat, but I am not sure this is comparable with the analysis of DP-SGD from here, as they are considering an update rule of

$$\theta_{t+1} = \theta_t - \eta \big( \nabla \ell(\theta_t) + \mathcal{N}(0, \sigma^2 I) \big).$$
For them to be comparable, shouldn't we scale the noise accordingly?
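To state the scaling question precisely, here is how I understand the two quantities involved (again a sketch under my assumptions, not the library's accountant):

```python
def gaussian_rdp(alpha, noise_multiplier):
    """RDP of the Gaussian mechanism at order `alpha` when the noise std is
    `noise_multiplier` times the sensitivity: eps(alpha) = alpha / (2 sigma^2)."""
    return alpha / (2.0 * noise_multiplier ** 2)

def effective_sigma(clip_norm, noise_multiplier, batch_size):
    """Std of the noise actually added to the *averaged* update in the
    clipped-mean rule above: clip_norm * noise_multiplier / batch_size
    (my reading; this is the sigma that the other analysis sees directly)."""
    return clip_norm * noise_multiplier / batch_size
```

So my question is whether the `sigma` in the other paper's update rule should be matched to `effective_sigma(...)` rather than to `noise_multiplier` itself.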