Parallel Multiple Proposal MCMC Algorithms
Abstract
We explore the variance reduction achievable through parallel implementation of multi-proposal MCMC algorithms and the use of control variates. Implemented sequentially, multi-proposal MCMC algorithms are of limited value, but they are very well suited for parallelization. Further, discarding the rejected states in an MCMC sampler can intuitively be seen as a waste of information, all the more so in a multi-proposal algorithm, where several states are discarded in each iteration. By forming an alternative estimator as a linear combination of the traditional sample mean and zero-mean random variables called control variates, we can improve on the traditional estimator. We present a setting for the multi-proposal MCMC algorithm and study it in two examples. The first example considers sampling from a simple Gaussian distribution, while in the second we design the framework for a multi-proposal mode-jumping algorithm for sampling from a distribution with several separated modes. We find that the variance reduction achieved by our control variate estimator generally increases with the number of proposals in the sampler. For the Gaussian example the benefit from parallelization is small, and little is gained from increasing the number of proposals. The mode-jumping example, however, is very well suited for parallelization, and we obtain a relative variance reduction per unit time of roughly 80% with 16 proposals in each iteration.
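The sketch below illustrates the control variate idea in the simplest single-proposal random-walk Metropolis setting, targeting the standard Gaussian of the first example; it is not the paper's multi-proposal or parallel construction, and the function and variable names are illustrative. The zero-mean control variate recycles the proposed (possibly rejected) state by subtracting the realized next state from its conditional expectation, and the coefficient beta is estimated from the sample covariance.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Standard Gaussian target, as in the first example.
    return -0.5 * x ** 2

def metropolis_with_cv(n_iter=100_000, step=2.5):
    """Random-walk Metropolis for N(0, 1) that also records a zero-mean
    control variate built from the proposed (possibly rejected) state."""
    x = 0.0
    h_vals = np.empty(n_iter)  # h(x_{i+1}) with h(x) = x
    z_vals = np.empty(n_iter)  # control variate, E[Z] = 0 by construction
    for i in range(n_iter):
        y = x + step * rng.standard_normal()
        alpha = min(1.0, np.exp(log_target(y) - log_target(x)))
        x_next = y if rng.random() < alpha else x
        # E[h(x_{i+1}) | x_i, y] minus the realized value: zero mean,
        # and it reuses the proposal even when it is rejected.
        z_vals[i] = alpha * y + (1.0 - alpha) * x - x_next
        h_vals[i] = x_next
        x = x_next
    return h_vals, z_vals

h, z = metropolis_with_cv()
cov = np.cov(h, z)             # 2x2 sample covariance matrix
beta = -cov[0, 1] / cov[1, 1]  # variance-minimizing coefficient
print(f"plain estimate of E[X]:            {h.mean(): .4f}")
print(f"control-variate adjusted estimate: {h.mean() + beta * z.mean(): .4f}")
```

In a multi-proposal version of this idea, each iteration contributes one such zero-mean quantity built from all proposed states rather than a single one, and the proposals can be evaluated in parallel.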