yuxiangw / autodp

autodp: A flexible and easy-to-use package for differential privacy


Difference between eps using this method and Abadi et al.

srxzr opened this issue · comments

Using the implementation of Abadi et al. computes a smaller eps compared to this method. I would appreciate your opinion on this. Is their method tighter?

https://github.com/tensorflow/privacy/tree/master/tutorials
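For reference, below is a minimal sketch (not part of the original question) of how eps is typically computed in those tutorials with the Abadi et al. moments/RDP accountant. The parameter values are illustrative assumptions, and the exact import path has moved across versions of tensorflow_privacy.

```python
# Hypothetical example: eps for DP-SGD with Poisson sampling via tf.privacy's
# RDP accountant. q, noise_multiplier, steps, and delta are assumed values.
from tensorflow_privacy.privacy.analysis.rdp_accountant import compute_rdp, get_privacy_spent

q = 0.01                 # sampling probability (batch_size / n), assumed
noise_multiplier = 5.0   # sigma of the Gaussian noise, assumed
steps = 1000             # number of composed SGD steps, assumed
delta = 1e-5

# Grid of RDP orders, following the style used in the tutorials
orders = [1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64))
rdp = compute_rdp(q, noise_multiplier, steps, orders)
eps, _, opt_order = get_privacy_spent(orders, rdp, target_delta=delta)
print('tf.privacy (Poisson-sampled Gaussian) eps =', eps)
```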

Hi Milad,
Thanks for your comment.
Could you provide a bit more context in the form of a minimal working example?

Short answer: autodp supports privacy amplification by choosing a random subset of fixed size. Both autodp and tf.privacy have also implemented the version for Poisson sampling (including each data point independently with a fixed probability). For Poisson sampling, the state-of-the-art calculation (at least for the Gaussian mechanism) is a bit tighter than the "random subset" calculation. Hope that explains the difference.
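For concreteness, here is a minimal sketch (not from the original reply) comparing the two accounting modes with autodp's RDP accountant, assuming the rdp_acct/rdp_bank API used in the autodp tutorials (method names may differ slightly across versions); the parameter values are illustrative.

```python
# Hypothetical comparison: Poisson sampling vs. fixed-size random subset
# accounting for a subsampled Gaussian mechanism in autodp.
from autodp import rdp_acct, rdp_bank

sigma = 5.0    # noise multiplier of the Gaussian mechanism, assumed
prob = 0.01    # sampling probability / subset fraction, assumed
niter = 1000   # number of composed rounds, assumed
delta = 1e-5

# RDP of a single Gaussian mechanism as a function of the order alpha
gaussian_rdp = lambda alpha: rdp_bank.RDP_gaussian({'sigma': sigma}, alpha)

# Accounting under Poisson sampling (each point included i.i.d. with prob)
acct_poisson = rdp_acct.anaRDPacct()
acct_poisson.compose_poisson_subsampled_mechanisms(gaussian_rdp, prob, coeff=niter)
print('Poisson sampling: eps =', acct_poisson.get_eps(delta))

# Accounting under "random subset" sampling (fixed cardinality, no replacement)
acct_subset = rdp_acct.anaRDPacct()
acct_subset.compose_subsampled_mechanism(gaussian_rdp, prob, coeff=niter)
print('Random subset:    eps =', acct_subset.get_eps(delta))
```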

The bottom line is:
If your algorithm actually draws a random subset with fixed cardinality, then using the Poisson-sampling bounds to account for privacy would be incorrect. You should make sure that your algorithm and the way you track your privacy losses are consistent.