Abstract
Markov chain sampling methods that adapt to characteristics of the
distribution being sampled can be constructed using the principle
that one can sample from a distribution by sampling uniformly from
the region under the plot of its density function. A Markov chain
that converges to this uniform distribution can be constructed by
alternating uniform sampling in the vertical direction with uniform
sampling from the horizontal "slice" defined by the current vertical
position, or more generally, with some update that leaves the uniform
distribution over this slice invariant. Such "slice sampling" methods
are easily implemented for univariate distributions, and can be used
to sample from a multivariate distribution by updating each variable
in turn. This approach is often easier to implement than Gibbs sampling
and more efficient than simple Metropolis updates, due to the ability
of slice sampling to adaptively choose the magnitude of changes made.
It is therefore attractive for routine and automated use. Slice sampling
methods that update all variables simultaneously are also possible.
These methods can adaptively choose the magnitudes of changes made
to each variable, based on the local properties of the density function.
More ambitiously, such methods could potentially adapt to the dependencies
between variables by constructing local quadratic approximations.
Another approach is to improve sampling efficiency by suppressing
random walks. This can be done for univariate slice sampling by "overrelaxation,"
and for multivariate slice sampling by "reflection" from the edges
of the slice.
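The univariate scheme described above can be sketched in a few lines of code. The following is a minimal illustration, not the paper's reference implementation: it alternates the vertical draw (a uniform height under the density) with a horizontal draw from the slice, using the "stepping-out" and shrinkage procedures to locate and sample the slice. The function and parameter names (`log_f`, `w`, `max_steps`) are illustrative choices.

```python
import math
import random

def slice_sample(log_f, x0, w=1.0, n_samples=1000, max_steps=50):
    """Minimal univariate slice sampler (stepping-out and shrinkage).

    log_f: log of an (unnormalized) density function
    x0: starting point; w: initial estimate of the slice width
    """
    samples = []
    x = x0
    for _ in range(n_samples):
        # Vertical step: draw an auxiliary level uniformly between 0 and
        # f(x), working in logs for numerical stability.
        log_y = log_f(x) + math.log(random.random())

        # Horizontal step: randomly position an interval of width w around
        # x, then "step out" until both ends lie outside the slice.
        left = x - w * random.random()
        right = left + w
        steps = max_steps
        while steps > 0 and log_f(left) > log_y:
            left -= w
            steps -= 1
        steps = max_steps
        while steps > 0 and log_f(right) > log_y:
            right += w
            steps -= 1

        # Sample uniformly from the interval, shrinking it toward x on
        # each rejection; this leaves the uniform distribution over the
        # slice invariant.
        while True:
            x1 = left + (right - left) * random.random()
            if log_f(x1) > log_y:
                x = x1
                break
            if x1 < x:
                left = x1
            else:
                right = x1
        samples.append(x)
    return samples
```

Note how the interval width adapts automatically: stepping out grows it where the density is broad, and shrinkage narrows it after rejections, which is the adaptivity the abstract contrasts with fixed-scale Metropolis proposals.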