tillahoffmann / summaries2


Minimizing the Expected Posterior Entropy Yields Optimal Summaries

This repository contains code and data to reproduce the results presented in the manuscript Minimizing the Expected Posterior Entropy Yields Optimal Summaries.

Figures and tables can be regenerated by executing the following steps:

  • Ensure a recent Python version is installed; this code has been tested with Python 3.10 on Ubuntu and macOS.
  • Optionally, create a new virtual environment.
  • Install the Python requirements by executing pip install -r requirements.txt from the root directory of the repository.
  • Install CmdStan by executing python -m cmdstanpy.install_cmdstan --version 2.31.0. Other recent versions of CmdStan may also work but have not been tested.
  • Optionally, verify the installation by executing pytest -v.
  • Execute cook exec "*:evaluation" to run all experiments and generate evaluation metrics, which are saved to workspace/[experiment name]/evaluation.csv.
  • Execute each of the Jupyter notebooks (saved as markdown files) in the notebooks folder to generate the figures.
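The steps above can be condensed into a single shell session, run from the root directory of the repository (the virtual-environment name .venv is an assumption, not prescribed by the repository):

```shell
# Optional: create and activate a fresh virtual environment.
python -m venv .venv
source .venv/bin/activate

# Install the Python requirements and CmdStan (tested with version 2.31.0).
pip install -r requirements.txt
python -m cmdstanpy.install_cmdstan --version 2.31.0

# Optional: verify the installation.
pytest -v

# Run all experiments and generate evaluation metrics.
cook exec "*:evaluation"
```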

Results Structure

After running the experiments (see above), the workspace folder contains all results. It is structured as follows; the same layout is repeated for each experiment.

benchmark-large  # One folder for each experiment.
    data  # Train, validation, and test split as pickle files; other temp files may also be present.
        test.pkl
        train.pkl
        validation.pkl
        ...
    samples  # (Approximate) posterior samples as pickle files.
        [sampler configuration name].pkl
        ...
    transformers  # Trained transformers, e.g., posterior mean estimators, as pickle files.
        [transformer configuration name]-[digits].pkl  # One of three replications with different seeds.
        [transformer configuration name].pkl  # Best transformer amongst the three replications.
    evaluation.csv  # Evaluation of different summary statistic extraction methods.
benchmark-small
    ...
coalescent
    ...
tree-large
    ...
figures  # Contains PDF figures after executing notebooks.
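The result files listed above are plain pickle files and can be read with the Python standard library. A minimal sketch using a stand-in file (in practice, point the path at a real result such as workspace/benchmark-large/data/train.pkl; the file contents shown here are hypothetical):

```python
import pickle
import tempfile
from pathlib import Path

# Create a stand-in pickle file so this example is self-contained; in practice,
# `path` would be a real result file under the workspace folder.
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "train.pkl"
    with open(path, "wb") as fp:
        pickle.dump({"theta": [0.1, 0.2], "x": [1, 2, 3]}, fp)  # hypothetical contents

    # Reading a result file back into memory.
    with open(path, "rb") as fp:
        data = pickle.load(fp)
```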

Each evaluation.csv file has seven columns:

  • path, which identifies the method used to extract summaries.
  • three columns {nlp,rmise,mise}, giving the best estimates of the negative log probability loss, root mean integrated squared error, and mean integrated squared error, respectively. The estimates are obtained by averaging over all samples in the corresponding test set.
  • three columns {nlp,rmise,mise}_err, giving standard errors obtained as sqrt(var / (n - 1)), where var is the variance of the metric in the test set and n is the size of the test set.
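As a concrete illustration of the standard-error formula above, the best estimate and its error for one metric can be computed from per-sample values like so (the numbers below are made up for illustration):

```python
import numpy as np

# Hypothetical per-sample nlp values over a small test set.
nlp = np.array([2.3, 2.1, 2.7, 2.4, 2.5])

n = nlp.size
best_estimate = nlp.mean()          # reported in the nlp column
# Standard error as defined above: sqrt(var / (n - 1)), where var is the
# (population) variance of the metric over the test set.
err = np.sqrt(nlp.var() / (n - 1))  # reported in the nlp_err column
```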

About

License: BSD 3-Clause "New" or "Revised" License


Languages

Python 99.3%, Stan 0.7%