yangarbiter / dp-dg

What You See is What You Get: Distributional Generalization for Algorithm Design in Deep Learning

Home Page: https://arxiv.org/abs/2204.03230


This repository contains the code for the experiments in the paper:

What You See is What You Get: Distributional Generalization for Algorithm Design in Deep Learning

Authors: Bogdan Kulynych, Yao-Yuan Yang, Yaodong Yu, Jarosław Błasiok, Preetum Nakkiran

To appear in NeurIPS 2022

Abstract

We investigate and leverage a connection between Differential Privacy (DP) and the recently proposed notion of Distributional Generalization (DG). Applying this connection, we introduce new conceptual tools for designing deep-learning methods that bypass "pathologies" of standard stochastic gradient descent (SGD). First, we prove that differentially private methods satisfy a "What You See Is What You Get (WYSIWYG)" generalization guarantee: whatever a model does on its train data is almost exactly what it will do at test time. This guarantee is formally captured by distributional generalization. WYSIWYG enables principled algorithm design in deep learning by reducing generalization concerns to optimization ones: in order to mitigate unwanted behavior at test time, it is provably sufficient to mitigate this behavior on the train data. This is notably false for standard (non-DP) methods, hence this observation has applications even when privacy is not required. For example, importance sampling is known to fail for standard SGD, but we show that it has exactly the intended effect for DP-trained models. Thus, with DP-SGD, unlike with SGD, we can influence test-time behavior by making principled train-time interventions. We use these insights to construct simple algorithms which match or outperform SOTA in several distributional robustness applications, and to significantly improve the privacy vs. disparate impact trade-off of DP-SGD. Finally, we also improve on known theoretical bounds relating differential privacy, stability, and distributional generalization.


Setup

Installation

Required packages and their versions are in requirements.txt.
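
To install them, a minimal sketch assuming a standard pip/venv workflow (the repository may prescribe a different environment):

python -m venv venv
source venv/bin/activate
pip install -r requirements.txt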

Dataset

Datasets are implemented in wilds/datasets/archive/.

Entry point

examples/run_expt.py

Set --root_dir=./data if you followed the dataset setup in the previous section.
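
For example, to download a dataset into ./data before training (a sketch assuming this fork keeps the upstream WILDS --dataset and --download flags; celebA is only an illustrative dataset choice):

python examples/run_expt.py --dataset celebA --algorithm ERM \
  --root_dir ./data --download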

Algorithms

  • Noisy gradient

Enable the --apply_noise flag.

A sample command:

python examples/run_expt.py --algorithm ERM --optimizer SGD \
  --apply_noise --batch_size $BATCHSIZE

  • DP-SGD

To run DP-SGD, add the --enable_privacy flag and set the DP-related arguments.

python examples/run_expt.py --algorithm ERM --optimizer SGD \
  --enable_privacy --sample_rate 0.0001 --delta 1e-5 \
  --max_per_sample_grad_norm 0.1

  • DP-IS-SGD

To run DP-IS-SGD, add the --weighted_uniform_iid and --enable_privacy flags, and set the DP-related arguments.

python examples/run_expt.py --algorithm ERM --optimizer SGD \
  --weighted_uniform_iid --enable_privacy --sample_rate 0.0001 --delta 1e-5 \
  --max_per_sample_grad_norm 0.1

  • Importance Sampling SGD (IS-SGD)

To run IS-SGD, add the --uniform_over_groups flag.

python examples/run_expt.py --algorithm ERM --optimizer SGD \
  --uniform_over_groups --batch_size $BATCHSIZE

  • Importance-weighted ERM (IWERM)

To run importance-weighted ERM, set --algorithm IWERM. A sample command:

python examples/run_expt.py --optimizer SGD --algorithm IWERM \
  --batch_size $BATCHSIZE

  • groupDRO

To run groupDRO, set --algorithm groupDRO.

python examples/run_expt.py --optimizer SGD --algorithm groupDRO \
  --batch_size $BATCHSIZE

  • DPSGD-f

To run DPSGD-f, set --algorithm ERMDPSGDf, --enable_fair_privacy, and the other privacy-related arguments.

python examples/run_expt.py --optimizer SGD --algorithm ERMDPSGDf \
  --max_per_sample_grad_norm $CLIPNORM --enable_fair_privacy \
  --batch_size $BATCHSIZE --delta 1e-5 --sigma ${SIGMA} \
  --uniform_iid --sample_rate $SAMPLERATE --C0 ${CLIPNORM} --sigma2 ${SIGMA2}
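
The sample commands above reference shell variables such as $BATCHSIZE, $CLIPNORM, $SIGMA, $SIGMA2, and $SAMPLERATE. The placeholder values below are purely illustrative, not the tuned hyperparameters from the paper:

BATCHSIZE=128        # minibatch size
CLIPNORM=0.1         # per-sample gradient clipping norm (also passed as --C0)
SIGMA=1.0            # noise multiplier for DP training
SIGMA2=1.0           # second noise multiplier used by DPSGD-f
SAMPLERATE=0.0001    # Poisson sampling rate for DP training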

Experiment Scripts

Citation

Code in this repository is modified from https://github.com/p-lambda/wilds.

For more experimental and technical details, please see our paper. If any of our proposed algorithms or their implementations are helpful, please cite:

@article{kulynych2022you,
  title={What You See is What You Get: Distributional Generalization for Algorithm Design in Deep Learning},
  author={Kulynych, Bogdan and Yang, Yao-Yuan and Yu, Yaodong and B{\l}asiok, Jaros{\l}aw and Nakkiran, Preetum},
  journal={arXiv preprint arXiv:2204.03230},
  year={2022}
}

License

MIT License

