Monday, May 10, 2010
Nested Sampling Algorithm (John Skilling)
Nested Sampling was developed by John Skilling (http://www.inference.phy.cam.ac.uk/bayesys/box/nested.pdf // http://ba.stat.cmu.edu/journal/2006/vol01/issue04/skilling.pdf).
Nested Sampling is a modified Markov chain Monte Carlo algorithm that can be used to explore the posterior distribution of a given model. The power of the algorithm lies in the fact that it is designed to compute both the mean posterior parameter values and the evidence. The algorithm is initialized by drawing a set of samples at random from the prior. It then contracts the distribution of samples around the high-likelihood regions by discarding the sample with the lowest likelihood, Lworst.
To keep the number of samples constant, another sample is chosen at random and duplicated. This duplicate is then randomized by taking Markov chain Monte Carlo steps subject to a hard constraint: a move is accepted only if the new likelihood exceeds the current threshold, L > Lworst. This ensures that the samples remain uniformly distributed over the constrained prior and that every new sample has a likelihood above the current threshold. The process is iterated until convergence. The evidence is the area under the sorted likelihood viewed as a function of the enclosed prior mass; in practice it is accumulated in log space for numerical stability. Once the algorithm has converged, one can compute both the mean parameter values and the log evidence.
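To make the procedure concrete, here is a minimal Python sketch of the loop described above, applied to a toy one-dimensional problem. The Gaussian log-likelihood, the uniform prior on [-10, 10], the random-walk explorer, and all parameter choices are illustrative assumptions of mine and not part of Skilling's reference code; for a proper implementation see the C/Python/R code linked below.

import numpy as np

def log_likelihood(theta):
    # Toy likelihood: standard normal centred at 0 (an assumption for this sketch)
    return -0.5 * theta**2 - 0.5 * np.log(2 * np.pi)

def sample_prior(rng):
    # Uniform prior on [-10, 10] (also an assumption for this sketch)
    return rng.uniform(-10.0, 10.0)

def explore(theta, logL_min, rng, steps=20, step_size=0.5):
    # Short constrained random walk: accept a move only if L > Lworst
    for _ in range(steps):
        proposal = theta + step_size * rng.normal()
        if -10.0 <= proposal <= 10.0 and log_likelihood(proposal) > logL_min:
            theta = proposal
    return theta

def nested_sampling(n_live=100, n_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    live = np.array([sample_prior(rng) for _ in range(n_live)])
    logL = np.array([log_likelihood(t) for t in live])

    logZ = -np.inf                                   # running log evidence
    log_width = np.log(1.0 - np.exp(-1.0 / n_live))  # first prior-mass shell

    for i in range(n_iter):
        worst = np.argmin(logL)                  # sample with the lowest likelihood
        log_weight = log_width + logL[worst]     # its contribution to the evidence
        logZ = np.logaddexp(logZ, log_weight)

        # Replace the worst point: copy a random survivor and evolve it
        # with constrained MCMC so its likelihood stays above Lworst.
        copy = rng.integers(n_live)
        while copy == worst and n_live > 1:
            copy = rng.integers(n_live)
        live[worst] = explore(live[copy], logL[worst], rng)
        logL[worst] = log_likelihood(live[worst])

        log_width -= 1.0 / n_live                # shrink the enclosed prior mass

    # Spread the remaining prior mass over the surviving live points
    log_rest = -n_iter / n_live - np.log(n_live)
    for l in logL:
        logZ = np.logaddexp(logZ, log_rest + l)
    return logZ

if __name__ == "__main__":
    print("log evidence ~", nested_sampling())

With 100 live points this should land near log(1/20) = -3.0, the analytic log evidence for this toy prior and likelihood, which gives a quick sanity check of the prior-mass bookkeeping.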
For a nice description of Nested Sampling, the book by Sivia and Skilling is highly recommended: Data Analysis: A Bayesian Tutorial.
Code in C, Python, and R, with the lighthouse problem as an example, is available at:
http://www.inference.phy.cam.ac.uk/bayesys/
The paper is available at:
http://www.inference.phy.cam.ac.uk/bayesys/nest.ps.gz