There are 28 repositories under the uncertainty-neural-networks topic.
PyTorch implementations of Bayes By Backprop, MC Dropout, SGLD, the Local Reparametrization Trick, KF-Laplace, SG-HMC, and more
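As a concrete illustration of the first of these methods, here is a minimal Bayes-by-Backprop layer in PyTorch. This is a sketch, not this repo's code; the layer interface and the initial rho value are illustrative assumptions.

```python
# Minimal Bayes-by-Backprop linear layer (illustrative sketch): weights get a
# learned Gaussian posterior q(w) = N(mu, sigma^2), sampled with the
# reparameterization trick, regularized toward a standard-normal prior.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))
        self.b_rho = nn.Parameter(torch.full((out_features,), -5.0))

    def forward(self, x):
        # sigma = softplus(rho) keeps the posterior std positive
        w_sigma = F.softplus(self.w_rho)
        b_sigma = F.softplus(self.b_rho)
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)  # reparameterized sample
        b = self.b_mu + b_sigma * torch.randn_like(b_sigma)

        def kl_gauss(mu, sigma):
            # closed-form KL( N(mu, sigma^2) || N(0, 1) ), summed over parameters
            return (0.5 * (sigma**2 + mu**2) - torch.log(sigma) - 0.5).sum()

        self.kl = kl_gauss(self.w_mu, w_sigma) + kl_gauss(self.b_mu, b_sigma)
        return F.linear(x, w, b)
```

Training minimizes the negative log-likelihood plus the accumulated KL terms (scaled by the number of minibatches); at test time, averaging several stochastic forward passes gives the predictive distribution.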
A library of Bayesian neural network layers for uncertainty estimation in deep learning, extending the core of PyTorch
This repository contains a collection of surveys, datasets, papers, and code for predictive uncertainty estimation in deep learning models.
This repo contains a PyTorch implementation of the paper: "Evidential Deep Learning to Quantify Classification Uncertainty"
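For orientation, the evidential classification head from the paper can be summarized in a few lines. This is a hedged sketch of the mechanism, not the repo's implementation; the batch and class counts are made up.

```python
# Evidential deep learning: the network outputs non-negative "evidence" that
# parameterizes a Dirichlet over class probabilities; total uncertainty is
# K / sum(alpha), which is 1 when no evidence is observed.
import torch
import torch.nn.functional as F

def evidential_outputs(logits):
    evidence = F.softplus(logits)            # non-negative evidence per class
    alpha = evidence + 1.0                   # Dirichlet parameters
    strength = alpha.sum(dim=-1, keepdim=True)
    probs = alpha / strength                 # expected class probabilities
    num_classes = logits.shape[-1]
    uncertainty = num_classes / strength     # in (0, 1]
    return probs, uncertainty

logits = torch.randn(4, 10)                  # e.g., a batch of 4, 10 classes
probs, u = evidential_outputs(logits)
```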
Code implementing the framework that equips deep learning models with total uncertainty estimates, as described in "A General Framework for Uncertainty Estimation in Deep Learning" (Loquercio, Segù, Scaramuzza; RA-L 2020).
Observations, notes, and thought experiments for understanding the workings of neural network models, using TensorFlow
To Trust Or Not To Trust A Classifier: a measure of uncertainty for any trained (possibly black-box) classifier that is more effective than the classifier's own implied confidence (e.g., the softmax probability of a neural network).
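The core trust-score idea can be sketched with a nearest-neighbor computation. This is a hedged approximation, not the authors' code: it omits the paper's density-based filtering of the training set, and the function name is hypothetical.

```python
# Trust score of a test point: distance to the nearest training point of any
# *other* class divided by distance to the nearest point of the *predicted*
# class. Higher means the prediction is more trustworthy.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def trust_scores(x_train, y_train, x_test, y_pred):
    classes = np.unique(y_train)  # sorted class labels
    # one 1-nearest-neighbor index per class
    nn_by_class = {c: NearestNeighbors(n_neighbors=1).fit(x_train[y_train == c])
                   for c in classes}
    # distance from each test point to the closest training point of each class
    d = np.stack([nn_by_class[c].kneighbors(x_test)[0][:, 0] for c in classes],
                 axis=1)
    d_pred = d[np.arange(len(x_test)), np.searchsorted(classes, y_pred)]
    d_other = np.where(classes[None, :] == y_pred[:, None], np.inf, d).min(axis=1)
    return d_other / (d_pred + 1e-12)
```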
My implementation of the paper "Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles"
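The deep-ensembles recipe itself is simple enough to fit in a short sketch. This is illustrative, not this repo's code; the paper additionally uses adversarial training and, for regression, a heteroscedastic NLL loss, and the architecture and ensemble size below are assumptions.

```python
# Deep ensembles: train M independently initialized networks, then average
# their softmax outputs at test time; predictive entropy is the uncertainty.
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))

ensemble = [make_net() for _ in range(5)]  # M = 5, each trained separately

@torch.no_grad()
def ensemble_predict(x):
    probs = torch.stack([net(x).softmax(dim=-1) for net in ensemble])
    mean = probs.mean(dim=0)                                      # predictive dist.
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)   # uncertainty
    return mean, entropy
```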
Official repository for the paper "Masksembles for Uncertainty Estimation" (CVPR 2021).
Code for "Depth Uncertainty in Neural Networks" (https://arxiv.org/abs/2006.08437)
A list of papers on Active Learning and Uncertainty Estimation for Neural Networks.
A PyTorch implementation of MCDO (Monte Carlo Dropout) methods
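Monte Carlo Dropout fits in a few lines. This is a generic sketch, not this repo's code; the dropout rate, layer sizes, and number of passes are illustrative assumptions.

```python
# MC Dropout: keep dropout active at test time and average T stochastic
# forward passes; the spread across passes is the uncertainty estimate.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.5),
                      nn.Linear(64, 1))

@torch.no_grad()
def mc_dropout_predict(x, T=50):
    model.train()  # train() keeps nn.Dropout stochastic during inference
    preds = torch.stack([model(x) for _ in range(T)])
    return preds.mean(dim=0), preds.std(dim=0)  # predictive mean and std
```

Note that `model.train()` also switches layers such as BatchNorm into training mode, so real implementations typically enable only the dropout modules.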
Model zoo for different kinds of uncertainty quantification methods used in Natural Language Processing, implemented in PyTorch.
Uncertainty-Wizard is a plugin on top of tensorflow.keras that lets you easily and efficiently create uncertainty-aware deep neural networks. It is also useful if you want to train multiple small models in parallel.
Inferring distributions over depth from a single image, IROS 2019
A primer on Bayesian Neural Networks. This reading list aims to help new researchers enter the field of Bayesian Deep Learning by providing an overview of key papers. More details: "A Primer on Bayesian Neural Networks: Review and Debates"
Implementation of the MNIST experiment for Monte Carlo Dropout from http://mlg.eng.cam.ac.uk/yarin/PDFs/NIPS_2015_bayesian_convnets.pdf
[WACV'22] Official implementation of "HHP-Net: A light Heteroscedastic neural network for Head Pose estimation with uncertainty"
Benchmarking uncertainty quantification methods on proteins.
A library of losses for training a model from scratch, or fine-tuning a pretrained one, to improve out-of-distribution detection and uncertainty estimation. It calibrates the model to produce better uncertainty estimates and detects out-of-distribution data using a chosen score type and threshold.
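The library's own losses and score types are not reproduced here, but the simplest instance of score-and-threshold OOD detection is the maximum-softmax-probability baseline. In this generic sketch the threshold value is an assumption to be tuned on held-out data.

```python
# Maximum-softmax-probability OOD baseline: flag an input as
# out-of-distribution when its top softmax score falls below a threshold.
import torch

@torch.no_grad()
def is_ood(model, x, threshold=0.7):
    probs = model(x).softmax(dim=-1)
    return probs.max(dim=-1).values < threshold  # True = likely OOD
```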
PyTorch implementation of Probabilistic MIMO U-Net
NOMU: Neural Optimization-based Model Uncertainty
NeurIPS paper "Censored Quantile Regression Neural Networks for Distribution-Free Survival Analysis"
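Quantile regression networks are trained with the pinball loss; a minimal PyTorch version is below. The paper additionally handles censored targets, which this sketch omits.

```python
# Pinball (quantile) loss for quantile level q: penalizes under-prediction
# with weight q and over-prediction with weight 1 - q.
import torch

def pinball_loss(pred, target, q=0.5):
    diff = target - pred
    return torch.maximum(q * diff, (q - 1.0) * diff).mean()
```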
Code and supporting materials for the ICLR 2020 RIO paper
Experiments from "Efficient Training of Interval Neural Networks for Imprecise Training Data"
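One core ingredient of interval neural networks is propagating an interval through a layer; a generic sketch of that operation for a linear layer follows. This is not the paper's training procedure, only the interval arithmetic it builds on.

```python
# Interval propagation through y = x @ W.T + b: split W by sign so the lower
# and upper bounds of the output interval stay sound.
import torch

def interval_linear(lower, upper, W, b):
    W_pos, W_neg = W.clamp(min=0), W.clamp(max=0)
    out_lower = lower @ W_pos.T + upper @ W_neg.T + b
    out_upper = upper @ W_pos.T + lower @ W_neg.T + b
    return out_lower, out_upper
```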
ML framework to estimate Bayesian posteriors of galaxy morphological parameters
Probabilistic load forecasting with Reservoir Computing
The second-moment loss (SML) is a novel training objective for dropout-based regression networks that yields improved uncertainty estimates.