
Dark energy and cosmic curvature: Monte-Carlo Markov Chain approach

Added by: Yungui Gong
Publication date: 2008
Fields: Physics
Language: English





We use the Monte-Carlo Markov Chain method to explore the dark energy properties and the cosmic curvature by fitting two popular dark energy parameterizations to the observational data. The new 182 gold supernova Ia data and the ESSENCE data both give good constraints on the dark energy parameters and the cosmic curvature for the dark energy model $w(z)=w_0+w_a z/(1+z)$. The cosmic curvature is found to be $|\Omega_k|\lesssim 0.03$. For the dark energy model $w(z)=w_0+w_a z/(1+z)^2$, the ESSENCE data gives a tighter constraint on the cosmic curvature, and we get $|\Omega_k|\leq 0.02$.
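For readers who want a concrete picture of the fitting procedure, the sketch below runs a random-walk Metropolis-Hastings chain over (Omega_m, Omega_k, w_0, w_a) for the parameterization w(z) = w_0 + w_a z/(1+z). It is a minimal illustration only: the supernova data are synthetic, and H_0, the measurement errors, the flat priors and the step sizes are assumptions, not the authors' pipeline or the gold/ESSENCE samples.

```python
# Rough illustrative sketch (not the authors' code; synthetic data stand in
# for the gold/ESSENCE samples): Metropolis-Hastings over (Om, Ok, w0, wa)
# for w(z) = w0 + wa*z/(1+z), fitted to mock supernova distance moduli.
import numpy as np

C_KM_S = 299792.458          # speed of light [km/s]
H0 = 70.0                    # Hubble constant [km/s/Mpc], held fixed here (assumption)

def E(z, om, ok, w0, wa):
    """Dimensionless Hubble rate H(z)/H0 for w(z) = w0 + wa*z/(1+z)."""
    ode = 1.0 - om - ok
    fde = (1 + z) ** (3 * (1 + w0 + wa)) * np.exp(-3 * wa * z / (1 + z))
    return np.sqrt(om * (1 + z) ** 3 + ok * (1 + z) ** 2 + ode * fde)

def mu_model(z_obs, om, ok, w0, wa):
    """Distance moduli, with the curvature-dependent comoving distance."""
    zg = np.linspace(0.0, z_obs.max(), 500)
    f = 1.0 / E(zg, om, ok, w0, wa)
    dc_grid = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(zg))))
    dc = np.interp(z_obs, zg, dc_grid)
    if ok > 1e-8:                                  # open universe
        dc = np.sinh(np.sqrt(ok) * dc) / np.sqrt(ok)
    elif ok < -1e-8:                               # closed universe
        dc = np.sin(np.sqrt(-ok) * dc) / np.sqrt(-ok)
    dl = (1 + z_obs) * (C_KM_S / H0) * dc          # luminosity distance [Mpc]
    return 5 * np.log10(dl) + 25

# --- mock supernova sample (purely illustrative) ---
rng = np.random.default_rng(0)
z_obs = np.sort(rng.uniform(0.02, 1.5, 150))
sigma = 0.15
mu_obs = mu_model(z_obs, 0.3, 0.0, -1.0, 0.0) + rng.normal(0.0, sigma, z_obs.size)

def chi2(theta):
    om, ok, w0, wa = theta
    if not (0.0 < om < 1.0 and abs(ok) < 0.3):     # crude flat priors (assumption)
        return np.inf
    return np.sum(((mu_obs - mu_model(z_obs, om, ok, w0, wa)) / sigma) ** 2)

# --- random-walk Metropolis-Hastings chain ---
theta = np.array([0.3, 0.0, -1.0, 0.0])
c2 = chi2(theta)
step = np.array([0.02, 0.02, 0.05, 0.2])
chain = []
for _ in range(20000):
    prop = theta + step * rng.normal(size=4)
    c2_prop = chi2(prop)
    if np.log(rng.uniform()) < 0.5 * (c2 - c2_prop):   # Metropolis acceptance
        theta, c2 = prop, c2_prop
    chain.append(theta.copy())

chain = np.array(chain[5000:])                     # drop burn-in
print("posterior means (Om, Ok, w0, wa):", chain.mean(axis=0))
```

In a real analysis one would replace the mock data with the observed distance moduli and marginalize over the nuisance combination of H_0 and the supernova absolute magnitude rather than fixing it.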



Related research

An important task in machine learning and statistics is the approximation of a probability measure by an empirical measure supported on a discrete point set. Stein Points are a class of algorithms for this task, which proceed by sequentially minimising a Stein discrepancy between the empirical measure and the target and, hence, require the solution of a non-convex optimisation problem to obtain each new point. This paper removes the need to solve this optimisation problem by, instead, selecting each new point based on a Markov chain sample path. This significantly reduces the computational cost of Stein Points and leads to a suite of algorithms that are straightforward to implement. The new algorithms are illustrated on a set of challenging Bayesian inference problems, and rigorous theoretical guarantees of consistency are established.
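As a rough illustration of how selection along a Markov chain sample path can replace the inner optimisation, the toy sketch below greedily picks Stein Points for a one-dimensional standard normal target from the states visited by a random-walk Metropolis chain. The RBF base kernel, chain settings and greedy objective are illustrative assumptions, not the paper's implementation.

```python
# Toy sketch (illustrative, not the paper's code): greedy Stein Point
# selection where the candidate set is the path of a Metropolis chain
# targeting a 1-D standard normal.
import numpy as np

def score(x):
    """Score function d/dx log p(x) of the standard normal target."""
    return -x

def stein_kernel(x, y, ell=1.0):
    """Langevin-Stein kernel k0(x, y) built from an RBF base kernel."""
    k = np.exp(-(x - y) ** 2 / (2 * ell ** 2))
    dkx = -(x - y) / ell ** 2 * k
    dky = (x - y) / ell ** 2 * k
    dkxy = (1.0 / ell ** 2 - (x - y) ** 2 / ell ** 4) * k
    return dkxy + score(x) * dky + score(y) * dkx + score(x) * score(y) * k

# --- random-walk Metropolis chain providing the candidate path ---
rng = np.random.default_rng(1)
x, path = 0.0, []
for _ in range(3000):
    prop = x + 0.8 * rng.normal()
    if np.log(rng.uniform()) < 0.5 * (x ** 2 - prop ** 2):
        x = prop
    path.append(x)
path = np.array(path)

# --- greedy Stein Point selection along the path ---
points = []
for _ in range(20):
    # greedy objective: k0(x, x)/2 + sum_j k0(x, x_j), minimised over visited states
    obj = 0.5 * stein_kernel(path, path)
    for p in points:
        obj += stein_kernel(path, np.full_like(path, p))
    points.append(path[np.argmin(obj)])

print("Stein points:", np.sort(np.array(points)))
```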
We introduce interacting particle Markov chain Monte Carlo (iPMCMC), a PMCMC method based on an interacting pool of standard and conditional sequential Monte Carlo samplers. Like related methods, iPMCMC is a Markov chain Monte Carlo sampler on an extended space. We present empirical results that show significant improvements in mixing rates relative to both non-interacting PMCMC samplers and a single PMCMC sampler with an equivalent memory and computational budget. An additional advantage of the iPMCMC method is that it is suitable for distributed and multi-core architectures.
A novel class of non-reversible Markov chain Monte Carlo schemes relying on continuous-time piecewise-deterministic Markov Processes has recently emerged. In these algorithms, the state of the Markov process evolves according to a deterministic dynamics which is modified using a Markov transition kernel at random event times. These methods enjoy remarkable features including the ability to update only a subset of the state components while other components implicitly keep evolving and the ability to use an unbiased estimate of the gradient of the log-target while preserving the target as invariant distribution. However, they also suffer from important limitations. The deterministic dynamics used so far do not exploit the structure of the target. Moreover, exact simulation of the event times is feasible for an important yet restricted class of problems and, even when it is, it is application specific. This limits the applicability of these techniques and prevents the development of a generic software implementation of them. We introduce novel MCMC methods addressing these shortcomings. In particular, we introduce novel continuous-time algorithms relying on exact Hamiltonian flows and novel non-reversible discrete-time algorithms which can exploit complex dynamics such as approximate Hamiltonian dynamics arising from symplectic integrators while preserving the attractive features of continuous-time algorithms. We demonstrate the performance of these schemes on a variety of applications.
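To make the continuous-time idea concrete, the toy sketch below implements a one-dimensional Zig-Zag sampler for a standard normal target, one of the simplest piecewise-deterministic MCMC schemes: the state moves with velocity plus or minus one between events, the event times are simulated exactly by inverting the integrated rate, and the velocity flips at each event. The target, rate and bookkeeping are illustrative assumptions, not the algorithms proposed in the paper.

```python
# Toy sketch (illustrative only): 1-D Zig-Zag sampler for a standard normal.
# Between events the state moves ballistically; event times are drawn
# exactly by inverting the integral of the rate lambda(s) = max(0, a + s),
# where a = v * x, and the velocity flips at each event.
import numpy as np

rng = np.random.default_rng(2)
x, v, t = 0.0, 1.0, 0.0
samples, t_grid, next_grab = [], 0.1, 0.1

for _ in range(5000):
    a = v * x
    e = rng.exponential()                          # exponential clock, -log(U)
    # solve  integral_0^tau max(0, a + s) ds = e  for the next event time
    tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * e)
    # record the trajectory on a regular time grid before the next event
    while next_grab < t + tau:
        samples.append(x + v * (next_grab - t))
        next_grab += t_grid
    # move to the event and flip the velocity
    x, t, v = x + v * tau, t + tau, -v

samples = np.array(samples)
print("mean %.3f  var %.3f  (target: 0, 1)" % (samples.mean(), samples.var()))
```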
We propose a minimal generalization of the celebrated Markov-Chain Monte Carlo algorithm which allows an arbitrary number of configurations to be visited at every Monte Carlo step. This is advantageous when a parallel computing machine is available, or when many biased configurations can be evaluated at little additional computational cost. As an example of the former case, we report a significant reduction of the thermalization time for the paradigmatic Sherrington-Kirkpatrick spin-glass model. For the latter case, we show that, by leveraging the exponential number of biased configurations automatically computed by Diagrammatic Monte Carlo, we can speed up computations in the Fermi-Hubbard model by two orders of magnitude.
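For context, the sketch below is a standard multiple-try Metropolis step with symmetric Gaussian proposals, shown only to illustrate evaluating several candidate configurations per MCMC step (the K proposals could be evaluated in parallel); it is not the generalization proposed here, and the target and settings are illustrative assumptions.

```python
# Standard multiple-try Metropolis sketch (not the paper's generalization):
# K candidates are weighted by the target, one is selected, and the move is
# accepted with the usual MTM ratio.  Target: 1-D standard normal.
import numpy as np

def log_pi(x):
    return -0.5 * x ** 2                     # unnormalised log target

rng = np.random.default_rng(3)
K, step, x = 8, 2.0, 0.0
chain = []
for _ in range(20000):
    # draw K candidates around the current state (parallelisable)
    ys = x + step * rng.normal(size=K)
    wy = np.exp(log_pi(ys))
    y = ys[rng.choice(K, p=wy / wy.sum())]   # pick one candidate by weight
    # reference set drawn around the selected candidate, plus the old state
    xs = np.append(y + step * rng.normal(size=K - 1), x)
    wx = np.exp(log_pi(xs))
    if rng.uniform() < wy.sum() / wx.sum():  # MTM acceptance ratio
        x = y
    chain.append(x)

chain = np.array(chain[2000:])
print("mean %.3f  var %.3f  (target: 0, 1)" % (chain.mean(), chain.var()))
```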
Boson sampling is a promising candidate for quantum supremacy. It requires sampling from a complicated distribution and is believed to be intractable on classical computers. Among the various classical sampling methods, the Markov chain Monte Carlo method is an important approach to the simulation and validation of boson sampling. This method, however, suffers from severe sample loss caused by the autocorrelation of the sample sequence. To address this, we propose the sample caching Markov chain Monte Carlo method, which eliminates the correlations among the samples while preventing sample loss, allowing more efficient simulation of boson sampling. Moreover, our method can be used as a general sampling framework that can benefit a wide range of sampling tasks, and it is particularly suitable for applications where a large number of samples are taken.
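To illustrate the sample-loss problem being addressed, the baseline sketch below runs an ordinary random-walk Metropolis chain and estimates its integrated autocorrelation time, showing how few effectively independent samples survive naive thinning. It is a baseline illustration only, not the proposed sample-caching scheme, and the target and chain settings are assumptions.

```python
# Baseline illustration (not the sample-caching method): a correlated
# Metropolis chain on a standard normal and its integrated autocorrelation
# time, which determines how many samples naive thinning would keep.
import numpy as np

rng = np.random.default_rng(4)
x, chain = 0.0, []
for _ in range(20000):
    prop = x + 0.3 * rng.normal()            # small steps -> strong correlation
    if np.log(rng.uniform()) < 0.5 * (x ** 2 - prop ** 2):
        x = prop
    chain.append(x)
chain = np.array(chain)

# autocorrelation function up to a maximum lag, then the integrated time
c = chain - chain.mean()
max_lag = 500
acf = np.array([np.mean(c[:c.size - k] * c[k:]) for k in range(max_lag)]) / c.var()
cut = np.argmax(acf < 0.05) or max_lag       # first lag where the ACF is small
tau = 1.0 + 2.0 * acf[1:cut].sum()           # integrated autocorrelation time
print("chain length %d, autocorrelation time ~ %.0f" % (chain.size, tau))
print("naive thinning keeps only ~%d nearly independent samples" % (chain.size / tau))
```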