Code to implement experiments from Divide-and-Conquer Monte Carlo Fusion by Ryan S.Y. Chan, Adam M. Johansen, Murray Pollock and Gareth O. Roberts.
Note: the package has been renamed to DCFusion, but the repository is still called hierarchicalFusion for now, since that is what the current arXiv and submitted versions link to. The repository name will change when the paper is updated.
To install, simply run: `devtools::install_github('rchan26/hierarchicalFusion')`
The experiments were run on Microsoft Azure using Data Science Virtual Machines (DSVMs) with either 16 cores (Section 4) or 64 cores (Section 5). The code uses parallel computing (via the base parallel package) and by default uses all cores available on the machine. To change this, modify the n_cores variable in the functions which perform the methodology (this is set to `parallel::detectCores()` by default).
Scripts for the experiments in each section:
- Section 4.1: varying_rho_replicates.R
- Section 4.2: varying_C_experiments_uniG_smc_replicates.R
- Section 4.3: separate_modes_smc.R and separate_modes_with_tempering.R
- Section 5.1: logistic_regression/simulated_data/
- Section 5.2: logistic_regression/credit_card/
The methodology implemented here is developed in the following papers:
- Divide-and-Conquer Monte Carlo Fusion
- Monte Carlo Fusion
- Bayesian Fusion: Scalable unification of distributed statistical analyses
The package is still in development: implementations of the Bayesian Fusion algorithm and a new Generalised Bayesian Fusion algorithm are currently in progress.
This work is licensed under a Creative Commons Attribution 4.0 International License.