
Dynamic nested sampling: an improved algorithm for parameter estimation and evidence calculation

Added by Edward Higson
Publication date: 2017
Language: English





We introduce dynamic nested sampling: a generalisation of the nested sampling algorithm in which the number of live points varies to allocate samples more efficiently. In empirical tests the new method significantly improves calculation accuracy compared to standard nested sampling with the same number of samples; this increase in accuracy is equivalent to speeding up the computation by factors of up to ~72 for parameter estimation and ~7 for evidence calculations. We also show that the accuracy of both parameter estimation and evidence calculations can be improved simultaneously. In addition, unlike in standard nested sampling, more accurate results can be obtained by continuing the calculation for longer. Popular standard nested sampling implementations can be easily adapted to perform dynamic nested sampling, and several dynamic nested sampling software packages are now publicly available.
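As a concrete illustration, the sketch below runs dynamic nested sampling with the publicly available dynesty package (one of the implementations the abstract alludes to); the Gaussian likelihood, uniform prior and all parameter choices are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from dynesty import DynamicNestedSampler

ndim = 3  # illustrative three-dimensional toy problem

def loglike(theta):
    # Isotropic unit Gaussian log-likelihood centred on the origin.
    return -0.5 * np.sum(theta**2) - 0.5 * ndim * np.log(2 * np.pi)

def prior_transform(u):
    # Map the unit cube to a uniform prior on [-10, 10] per dimension.
    return 20.0 * u - 10.0

# The dynamic sampler varies the number of live points during the run,
# allocating extra samples where they most improve the estimates.
dsampler = DynamicNestedSampler(loglike, prior_transform, ndim)
dsampler.run_nested()
print(dsampler.results.logz[-1])  # log-evidence estimate

# Unlike standard nested sampling, the calculation can simply be
# continued for longer to keep improving accuracy:
dsampler.add_batch()
```

By default the allocation balances parameter estimation and evidence accuracy; dynesty also exposes weighting options for favouring one or the other, matching the abstract's point that both can be improved simultaneously.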



Related research

133 - Kamran Javid 2019
Metropolis nested sampling evolves a Markov chain from the current live point and accepts new points along the chain according to a version of the Metropolis acceptance ratio modified to satisfy the likelihood constraint characteristic of nested sampling algorithms. The geometric nested sampling algorithm we present here is based on the Metropolis method, but treats parameters as though they represent points on certain geometric objects, namely circles, tori and spheres. For parameters which represent points on a circle or torus, the trial distribution is 'wrapped around' the domain of the posterior distribution, so that samples cannot be rejected automatically for falling outside the sampling domain when evaluating the Metropolis ratio. This also enhances the mobility of the sampler. For parameters which represent coordinates on the surface of a sphere, the algorithm transforms the parameters into a Cartesian coordinate system before sampling, which again ensures no samples are automatically rejected and provides a physically intuitive way of sampling the parameter space. We apply the geometric nested sampler to two types of toy model which include circular, toroidal and spherical parameters. We find that the geometric nested sampler generally outperforms MultiNest in both cases. We also apply the algorithm to a gravitational wave detection model which includes circular and spherical parameters, and find that the geometric nested sampler and MultiNest appear to perform equally well. Our implementation of the algorithm can be found at https://github.com/SuperKam91/nested_sampling.
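A minimal sketch of the wrapping trick for a single circular parameter, assuming a uniform prior and a symmetric Gaussian step (the step size and function names are hypothetical, not from the paper): the proposal is taken modulo the period, so a trial point can never fall outside the domain and be rejected automatically.

```python
import numpy as np

rng = np.random.default_rng()

def wrapped_proposal(phi, step_scale=0.5):
    # Propose a new angle on the circle [0, 2*pi); wrapping the step
    # keeps every trial point inside the sampling domain.
    return (phi + rng.normal(scale=step_scale)) % (2 * np.pi)

def constrained_metropolis_step(phi, loglike, logl_min):
    # One likelihood-constrained Metropolis step: with a uniform prior
    # and a symmetric wrapped proposal, the Metropolis ratio reduces to
    # the nested sampling hard constraint L(trial) > L_min.
    trial = wrapped_proposal(phi)
    return trial if loglike(trial) > logl_min else phi
```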
68 - Johannes Buchner 2017
The data torrent unleashed by current and upcoming astronomical surveys demands scalable analysis methods. Many machine learning approaches scale well, but separating the instrument measurement from the physical effects of interest, dealing with variable errors, and deriving parameter uncertainties is often an afterthought. Classic forward-folding analyses with Markov Chain Monte Carlo or Nested Sampling enable parameter estimation and model comparison, even for complex and slow-to-evaluate physical models. However, these approaches require independent runs for each data set, implying an infeasible number of model evaluations in the Big Data regime. Here I present a new algorithm, collaborative nested sampling, for deriving parameter probability distributions for each observation. Importantly, the number of physical model evaluations scales sub-linearly with the number of data sets, and no assumptions about homogeneous errors, Gaussianity, the form of the model or the heterogeneity/completeness of the observations need to be made. Collaborative nested sampling has immediate application in speeding up analyses of large surveys, integral-field-unit observations, and Monte Carlo simulations.
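The abstract does not spell out the implementation, but the economy it describes can be sketched: the slow physical model is evaluated once per proposed point, and the cheap comparison against every observation is vectorised, so the model-call count does not grow with the number of data sets. The Gaussian likelihood and array shapes below are illustrative assumptions only.

```python
import numpy as np

def shared_loglikes(theta, model, datasets, sigmas):
    # model(theta) is the expensive forward-folded prediction, shape (n_bins,).
    # datasets and sigmas have shape (n_datasets, n_bins); one model call
    # yields a log-likelihood (up to a constant) for every observation.
    prediction = model(theta)                  # single expensive evaluation
    resid = (datasets - prediction) / sigmas   # broadcast over data sets
    return -0.5 * np.sum(resid**2, axis=1)
```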
Sampling errors in nested sampling parameter estimation differ from those in Bayesian evidence calculation, but have been little studied in the literature. This paper provides the first explanation of the two main sources of sampling errors in nested sampling parameter estimation, and presents a new diagrammatic representation for the process. We find no current method can accurately measure the parameter estimation errors of a single nested sampling run, and propose a method for doing so using a new algorithm for dividing nested sampling runs. We empirically verify our conclusions and the accuracy of our new method.
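A hedged sketch of the resampling idea behind such error estimates, assuming the run has already been divided into single-live-point 'threads' as the paper proposes (the function names are hypothetical; the nestcheck package provides a full implementation):

```python
import numpy as np

rng = np.random.default_rng()

def bootstrap_std(threads, estimator, n_boot=200):
    # threads: list of single-live-point runs obtained by dividing one
    # nested sampling run. estimator: maps a list of threads to a scalar,
    # e.g. a posterior mean. Resampling threads with replacement mimics
    # repeated independent runs and yields a sampling-error estimate.
    values = []
    for _ in range(n_boot):
        idx = rng.integers(len(threads), size=len(threads))
        values.append(estimator([threads[i] for i in idx]))
    return np.std(values, ddof=1)
```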
99 - Johannes Buchner 2021
Nested sampling (NS) computes parameter posterior distributions and makes Bayesian model comparison computationally feasible. Its strengths are the unsupervised navigation of complex, potentially multi-modal posteriors until a well-defined termination point. A systematic literature review of nested sampling algorithms and variants is presented. We focus on complete algorithms, including solutions to likelihood-restricted prior sampling, parallelisation, termination and diagnostics. The relation between the number of live points, dimensionality and computational cost is studied for two complete algorithms. A new formulation of NS is presented, which casts the parameter space exploration as a search on a tree. Previously published ways of obtaining robust error estimates and dynamic variations of the number of live points are presented as special cases of this formulation. A new online diagnostic test is presented, building on previous work on insertion rank orders. The survey of nested sampling methods concludes with outlooks for future research.
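For reference, the basic estimate that all these variants compute can be written in a few lines: with n_live live points the enclosed prior volume shrinks on average by a factor exp(-1/n_live) per iteration, and the evidence is a volume-weighted sum over the dead points (a textbook sketch, not code from the review):

```python
import numpy as np

def log_evidence(logl_dead, n_live):
    # logl_dead: array of dead-point log-likelihoods in sampling order.
    # E[log X_i] = -i / n_live gives the expected prior volumes; point i
    # then carries weight X_{i-1} - X_i in the evidence quadrature.
    log_x = -np.arange(1, len(logl_dead) + 1) / n_live
    x = np.exp(np.concatenate(([0.0], log_x)))
    weights = x[:-1] - x[1:]
    lmax = logl_dead.max()  # log-sum-exp for numerical stability
    return lmax + np.log(np.sum(weights * np.exp(logl_dead - lmax)))
```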
Nested sampling is an efficient algorithm for the calculation of the Bayesian evidence and posterior parameter probability distributions. It is based on the step-by-step exploration of the parameter space by Monte Carlo sampling with a series of sets of values, called live points, that evolve towards the region of interest, i.e. where the likelihood function is maximal. In the presence of several local likelihood maxima, the algorithm converges with difficulty. Systematic errors can also be introduced by unexplored regions of the parameter volume. To avoid this, different methods have been proposed in the literature for an efficient search for new live points, even in the presence of local maxima. Here we present a new solution based on the mean shift cluster recognition method implemented in a random walk search algorithm. The cluster recognition is integrated within the Bayesian analysis program NestedFit. It is tested on the analysis of several difficult cases. Compared to the analysis results without cluster recognition, the computation time is considerably reduced. At the same time, the entire parameter space is efficiently explored, which translates into a smaller uncertainty on the extracted value of the Bayesian evidence.
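NestedFit's own mean shift implementation is embedded in its random walk search, but the clustering step itself can be illustrated with scikit-learn's MeanShift (an illustrative stand-in, not the NestedFit code):

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def cluster_live_points(live_points):
    # live_points: array of shape (n_live, n_dim). Points sharing a label
    # sit around the same local likelihood maximum and can be explored by
    # separate random walk searches, so isolated modes are not lost.
    bandwidth = estimate_bandwidth(live_points, quantile=0.2)
    return MeanShift(bandwidth=bandwidth).fit_predict(live_points)
```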