orientino/dum-components

Implementation of "Training, Architecture, and Prior for Deterministic Uncertainty Methods", ICLR 2023 Workshop on Trustworthy ML.

Training, Architecture, and Prior for Deterministic Uncertainty Methods

This is the repository for the ICLR 2023 workshop paper "Training, Architecture, and Prior for Deterministic Uncertainty Methods".

Abstract: Accurate and efficient uncertainty estimation is crucial for building reliable Machine Learning (ML) models that provide calibrated uncertainty estimates, generalize well, and detect Out-Of-Distribution (OOD) datasets. To this end, Deterministic Uncertainty Methods (DUMs) are a promising model family capable of performing uncertainty estimation in a single forward pass. This work investigates important design choices in DUMs: (1) we show that training schemes decoupling the core architecture from the uncertainty head can significantly improve uncertainty performance; (2) we demonstrate that the expressiveness of the core architecture is crucial for uncertainty performance, and that additional architectural constraints to avoid feature collapse can deteriorate the trade-off between OOD generalization and detection; (3) contrary to other Bayesian models, we show that the prior defined by DUMs does not have a strong effect on final performance.
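
To make point (1) concrete, below is a minimal sketch of decoupled training. It is not the code in this repository: the toy data, layer sizes, and the choice of a Gaussian-mixture density head are all illustrative assumptions. The core architecture is first trained with a standard classification loss, then frozen, and an uncertainty head is fit on its features so that uncertainty can be scored in a single forward pass.

# Minimal sketch (illustrative only, not the repository's code) of decoupling
# the core architecture from the uncertainty head.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from sklearn.mixture import GaussianMixture

# Toy data: two classes in 20 dimensions.
X = torch.randn(512, 20)
y = (X[:, 0] > 0).long()
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

# Core architecture: feature extractor plus a linear classification head.
encoder = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 16))
classifier = nn.Linear(16, 2)
opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)

# Stage 1: train the core architecture with a standard loss.
for _ in range(5):
    for xb, yb in loader:
        loss = nn.functional.cross_entropy(classifier(encoder(xb)), yb)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Stage 2: freeze the encoder and fit the uncertainty head on its features
# (here a Gaussian mixture used as a feature-space density model).
encoder.eval()
with torch.no_grad():
    feats = encoder(X).numpy()
density_head = GaussianMixture(n_components=2, covariance_type="full").fit(feats)

# Single-pass uncertainty: low log-density on new inputs signals OOD.
with torch.no_grad():
    test_feats = encoder(torch.randn(8, 20) * 3).numpy()
print(density_head.score_samples(test_feats))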

Install

conda env create -n dum --file environment.yml
conda activate dum
python setup.py develop

Run

For simple but complete examples, run one of the notebooks in notebook/run_*.ipynb. From these notebooks, you can change the dataset and the DUM hyperparameters to run more complex experiments.
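
If you prefer to execute a notebook non-interactively (for example on a remote machine), Jupyter's nbconvert can run it from the command line. The filename below is a placeholder for whichever notebook/run_*.ipynb you want to execute:

jupyter nbconvert --to notebook --execute --inplace notebook/run_example.ipynb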

Citation

If you use the code in this repository, consider citing our work:

@misc{dums-components,
  title = {Training, Architecture, and Prior for Deterministic Uncertainty Methods},
  author = {Charpentier, Bertrand and Zhang, Chenxiang and Günnemann, Stephan},
  publisher = {ICLR Workshop on Pitfalls of limited data and computation for Trustworthy ML},
  year = {2023},
}

About

License: MIT


Languages

Python 98.1%, Jupyter Notebook 1.9%