lacerbi / infbench

Benchmark of posterior and model inference algorithms for (moderately) expensive likelihoods.

Introduction

This repository contains code for the inference benchmarks and results reported in several papers on Variational Bayesian Monte Carlo (VBMC) [1,2,3].

The Variational Bayesian Monte Carlo (VBMC) algorithm itself is available in a separate repository.

Inference benchmark (infbench)

The goal of infbench is to compare sample-efficient approximate inference algorithms that have been proposed in the machine learning literature to deal with (moderately) expensive and potentially noisy likelihoods. In particular, we want to infer the posterior over model parameters and (an approximation of) the model evidence or marginal likelihood, that is, the normalization constant of the posterior. Crucially, we assume a budget of up to several hundred likelihood evaluations.
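
Concretely, writing the data as D and the model parameters as theta, the two quantities of interest are the posterior and its normalization constant (in LaTeX notation):

  p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{Z}, \qquad Z = p(D) = \int p(D \mid \theta)\, p(\theta)\, \mathrm{d}\theta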

Notably, this goal is more ambitious than simply finding the maximum of the posterior (MAP estimation), a problem that we previously tackled with Bayesian Adaptive Direct Search (aka BADS).

Our first benchmark shows that existing inference algorithms perform quite poorly at reconstructing posteriors (or evaluating their normalization constants) on both synthetic and real pdfs with moderately challenging features, showing that this is a much harder problem [1,2].

Our second extensive benchmark shows that the latest version of VBMC (v1.0, June 2020) beats other state-of-the-art methods on real computational problems when dealing with noisy log-likelihood evaluations, such as those arising from simulation-based estimation techniques [3].

How to run the original benchmark (vbmc18)

You can run the benchmark on one test problem in the vbmc18 problem set as follows:

> options = struct('SpeedTests',false);
> [probstruct,history] = infbench_run('vbmc18',testfun,D,[],algo,id,options);

The arguments are:

  • testfun (string) is the test pdf, which can be 'cigar', 'lumpy', 'studentt' (synthetic test functions with different properties), or 'goris2015' (real model fitting problem with neuronal data, see here).
  • D (integer) is the dimensionality of the problem for the synthetic test functions (typical values range from D=2 to D=10); for the goris2015 test set, use D=7 or D=8 (corresponding to two different neuron datasets).
  • algo (string) is the inference algorithm being tested. Specific settings for the chosen inference algorithm are selected with 'algo@setting', e.g. 'agp@debug'.
  • id (integer) is the run id, used to set the random seed. It can be an array, such as id=1:5, for multiple consecutive runs.
  • options (struct) sets various options for the benchmark. For a fast test, I recommend setting the field SpeedTests to false, since the initial speed tests can be quite time-consuming.

The outputs are:

  • probstruct (struct) describes the current problem in detail.
  • history (struct array) contains summary statistics of the run, one element per id.
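
For example, a minimal sketch of a full call, running three consecutive runs on the lumpy test function in D=2 (here 'vbmc' is assumed to be an available algorithm identifier; substitute any algorithm string supported by your setup):

> options = struct('SpeedTests',false);
> % The fourth argument (noise) is left empty for the vbmc18 problems
> [probstruct,history] = infbench_run('vbmc18','lumpy',2,[],'vbmc',1:3,options);
> % history(1:3) then contain the summary statistics for run ids 1 to 3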

How to run the extensive benchmark (vbmc20)

The vbmc20 benchmark includes a number of real, challenging models and datasets, largely from computational and cognitive neuroscience, ranging from D = 3 to D = 9. The benchmark is mostly designed to test methods that deal with noisy log-likelihood evaluations.

You can run the benchmark on one test problem in the vbmc20 problem set as follows:

> options = struct('SpeedTests',false);
> [probstruct,history] = infbench_run('vbmc20',testfun,D,noise,algo,id,options);

The arguments are:

  • testfun (string) indicates the tested model, which can be 'wood2010' (Ricker), 'krajbich2010' (aDDM), 'acerbi2012' (Timing), 'acerbidokka2018' (Multisensory), 'goris2015b' (Neuronal), or 'akrami2018b' (Rodent). The additional model presented in the Supplement is 'price2018' (g-and-k).
  • D (integer) selects the dataset: D = 1 for Ricker, Timing, Rodent, and g-and-k; D = 1 or D = 2 for aDDM and Multisensory; D = 107 or D = 108 for Neuronal. For all problems (except Neuronal), add 100 to D to obtain a noiseless version of the problem.
  • noise (double) is the standard deviation of Gaussian noise added to the log-likelihood. Leave it empty (noise = []) for all problems except Neuronal, for which we used noise = 2 in the benchmark (all other problems in vbmc20 are intrinsically noisy).
  • algo (string) is the inference algorithm being tested. For the algorithms tested in the paper [3], use 'vbmc@imiqr', 'vbmc@viqr', 'vbmc@npro', 'vbmc@eig', 'parallelgp@v3', or 'wsabiplus@ldet'.

For the other input and output arguments, see above.
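
For instance, the following call runs the VBMC-VIQR algorithm on the Ricker model (all argument values below are taken from the lists above):

> options = struct('SpeedTests',false);
> % Ricker ('wood2010') has a single dataset (D = 1) and is intrinsically
> % noisy, so the noise argument is left empty
> [probstruct,history] = infbench_run('vbmc20','wood2010',1,[],'vbmc@viqr',1,options);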

Code used to generate figures in the paper [3] is available in this folder. However, you will first need to run the benchmark (due to space limitations, we cannot upload the bulk of the numerical results here).

References

  1. Acerbi, L. (2018). Variational Bayesian Monte Carlo. In Advances in Neural Information Processing Systems 31: 8222-8232. (paper + supplement on arXiv, NeurIPS Proceedings)
  2. Acerbi, L. (2019). An Exploration of Acquisition and Mean Functions in Variational Bayesian Monte Carlo. In Proc. Machine Learning Research 96: 1-10. 1st Symposium on Advances in Approximate Bayesian Inference, Montréal, Canada. (paper in PMLR)
  3. Acerbi, L. (2020). Variational Bayesian Monte Carlo with Noisy Likelihoods. To appear in Advances in Neural Information Processing Systems 33. arXiv preprint arXiv:2006.08655 (preprint on arXiv).

Contact

This repository is currently actively used for research; stay tuned for updates:

  • Follow me on Twitter for updates about my work on model inference and other projects I am involved with;
  • If you have questions or comments about this work, get in touch at luigi.acerbi@helsinki.fi (putting 'infbench' in the subject of the email).

License

The inference benchmark is released under the terms of the GNU General Public License v3.0.
