
Numerical analysis for inchworm Monte Carlo method: Sign problem and error growth

Posted by: Zhenning Cai
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





We consider the numerical analysis of the inchworm Monte Carlo method, which was recently proposed to tackle the numerical sign problem for open quantum systems. We focus on the growth of the numerical error with respect to the simulation time, for which the inchworm Monte Carlo method shows a flatter curve than the direct application of the Monte Carlo method to the classical Dyson series. To better understand the underlying mechanism of the inchworm Monte Carlo method, we distinguish two types of exponential error growth, known as the numerical sign problem and error amplification. The former is due to the fast growth of the variance in the stochastic method, which can be observed from the Dyson series, while the latter comes from the evolution of the numerical solution. Our analysis demonstrates that the technique of partial resummation can be regarded as a tool to balance these two types of error, and the inchworm Monte Carlo method is a successful case in which the numerical sign problem is effectively suppressed by such means. We first demonstrate our idea in the context of ordinary differential equations, and then provide a complete analysis of the inchworm Monte Carlo method. Several numerical experiments are carried out to verify our theoretical results.
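As a rough illustration of the two error mechanisms mentioned in the abstract (a toy sketch in the spirit of the paper's ODE discussion, not its actual analysis and not the inchworm algorithm itself), the Python snippet below compares a direct Monte Carlo evaluation of the Dyson-type series for a scalar ODE with a stepwise, partially resummed estimator. The coefficient, step sizes, and sample counts are arbitrary choices made for this sketch.

import numpy as np

rng = np.random.default_rng(0)

# Toy ODE u'(t) = a(t) u(t), u(0) = 1, with an oscillatory coefficient so that
# terms of the Dyson-type series alternate in sign.
a = lambda s: np.cos(s)
exact = lambda t: np.exp(np.sin(t))      # closed-form solution for comparison

def dyson_mc(t, n_samples):
    # Unbiased estimator of the full series on [0, t]: with N ~ Poisson(t) and
    # U_i ~ Uniform(0, t), u(t) = E[ exp(t) * prod_i a(U_i) ].  When a changes
    # sign, the variance of the integrand grows quickly with t (sign problem).
    est = np.empty(n_samples)
    for k in range(n_samples):
        n = rng.poisson(t)
        est[k] = np.exp(t) * np.prod(a(rng.uniform(0.0, t, size=n)))
    return est.mean(), est.std(ddof=1) / np.sqrt(n_samples)

def stepwise_mc(t, n_steps, n_samples):
    # Partial-resummation analogue: estimate the short-time propagator on each
    # subinterval separately and multiply the estimates, so that the variance
    # per step stays small, at the price of error amplification across steps.
    grid = np.linspace(0.0, t, n_steps + 1)
    u = 1.0
    for t0, t1 in zip(grid[:-1], grid[1:]):
        dt = t1 - t0
        samples = np.empty(n_samples)
        for k in range(n_samples):
            n = rng.poisson(dt)
            samples[k] = np.exp(dt) * np.prod(a(rng.uniform(t0, t1, size=n)))
        u *= samples.mean()
    return u

if __name__ == "__main__":
    for t in (2.0, 6.0, 10.0):
        mean, stderr = dyson_mc(t, 20000)
        print(f"t={t:5.1f}  exact={exact(t):7.4f}  direct={mean:7.4f} "
              f"(+/- {stderr:.3f})  stepwise={stepwise_mc(t, int(10 * t), 2000):7.4f}")

In this toy setting the exact solution stays bounded while the standard error of the direct series estimator grows rapidly with t, whereas the stepwise estimator trades that variance growth for the multiplicative propagation of small per-step errors, mirroring the sign-problem versus error-amplification distinction made in the abstract.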




Read also

We investigate in this work a recently proposed diagrammatic quantum Monte Carlo method, the inchworm Monte Carlo method, for open quantum systems. We establish its validity rigorously based on resummation of Dyson series. Moreover, we introduce an integro-differential equation formulation for open quantum systems, which illuminates the mathematical structure of the inchworm algorithm. This new formulation leads to an improvement of the inchworm algorithm by introducing classical deterministic time-integration schemes. The numerical method is validated by applications to the spin-boson model.
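The abstract mentions that the integro-differential formulation can be treated with classical deterministic time-integration schemes. As a rough illustration of that idea only (not the paper's actual inchworm equation or kernels, which are not given here), the following is a minimal solver for a generic Volterra integro-differential equation using forward Euler in time and the trapezoidal rule for the memory integral.

import numpy as np

# Generic Volterra integro-differential equation
#     g'(t) = k0(t) g(t) + int_0^t K(t, s) g(s) ds,   g(0) = g0,
# with placeholder kernels chosen purely for illustration.
def solve_ide(k0, K, g0, T, n_steps):
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    g = np.empty(n_steps + 1)
    g[0] = g0
    for n in range(n_steps):
        # trapezoidal approximation of the history integral at time t[n]
        if n == 0:
            memory = 0.0
        else:
            w = np.full(n + 1, dt)
            w[0] = w[-1] = 0.5 * dt
            memory = np.dot(w, K(t[n], t[:n + 1]) * g[:n + 1])
        g[n + 1] = g[n] + dt * (k0(t[n]) * g[n] + memory)   # forward Euler step
    return t, g

if __name__ == "__main__":
    # toy kernels: k0(t) = -1 and K(t, s) = -exp(-(t - s))
    t, g = solve_ide(lambda t: -1.0, lambda t, s: -np.exp(-(t - s)), 1.0, 10.0, 2000)
    print("g(T) =", g[-1])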
This paper provides an a priori error analysis of a localized orthogonal decomposition method (LOD) for the numerical stochastic homogenization of a model random diffusion problem. If the uniformly elliptic and bounded random coefficient field of the model problem is stationary and satisfies a quantitative decorrelation assumption in the form of the spectral gap inequality, then the expected $L^2$ error of the method can be estimated, up to logarithmic factors, by $H+(\varepsilon/H)^{d/2}$, where $\varepsilon$ is the small correlation length of the random coefficient and $H$ the width of the coarse finite element mesh that determines the spatial resolution. The proof bridges recent results of numerical homogenization and quantitative stochastic homogenization.
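As a short side calculation not stated in the abstract: ignoring the logarithmic factors, the two contributions in the bound $H+(\varepsilon/H)^{d/2}$ can be balanced by choosing the coarse mesh width as

\[
  H = \Bigl(\frac{\varepsilon}{H}\Bigr)^{d/2}
  \;\Longleftrightarrow\;
  H^{(d+2)/2} = \varepsilon^{d/2}
  \;\Longleftrightarrow\;
  H = \varepsilon^{d/(d+2)},
\]

so that with $H \sim \varepsilon^{d/(d+2)}$ the expected $L^2$ error is of order $\varepsilon^{d/(d+2)}$ up to logarithmic factors, i.e. $\varepsilon^{1/2}$ for $d=2$ and $\varepsilon^{3/5}$ for $d=3$.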
We present a residual-based a posteriori error estimator for the hybrid high-order (HHO) method for the Stokes model problem. Both the proposed HHO method and the error estimator are valid in two and three dimensions and support arbitrary approximation orders on fairly general meshes. Upper and lower bounds for the error estimator are proved, and the key ingredient of the proofs is a novel stabilizer employed in the discrete scheme. Using the given estimator, an adaptive algorithm for the HHO method is designed to solve the model problem. Finally, the expected theoretical results are demonstrated numerically on a variety of meshes for the model problem.
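The abstract does not spell out the adaptive loop. A common ingredient of such estimator-driven algorithms is a bulk (Doerfler) marking step, sketched below with made-up indicator values; this is a generic illustration, not the authors' algorithm.

import numpy as np

def dorfler_mark(eta, theta=0.5):
    # Mark a minimal set of cells whose squared error indicators carry at
    # least a fraction theta of the total estimated (squared) error.
    order = np.argsort(eta**2)[::-1]              # largest indicators first
    cumulative = np.cumsum(eta[order]**2)
    n_marked = int(np.searchsorted(cumulative, theta * cumulative[-1])) + 1
    return order[:n_marked]

# example with hypothetical cell-wise indicator values
print(dorfler_mark(np.array([0.9, 0.1, 0.5, 0.05, 0.3])))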
Stochastic PDE eigenvalue problems often arise in the field of uncertainty quantification, whereby one seeks to quantify the uncertainty in an eigenvalue, or its eigenfunction. In this paper we present an efficient multilevel quasi-Monte Carlo (MLQMC) algorithm for computing the expectation of the smallest eigenvalue of an elliptic eigenvalue problem with stochastic coefficients. Each sample evaluation requires the solution of a PDE eigenvalue problem, and so tackling this problem in practice is notoriously computationally difficult. We speed up the approximation of this expectation in four ways: 1) we use a multilevel variance reduction scheme to spread the work over a hierarchy of FE meshes and truncation dimensions; 2) we use QMC methods to efficiently compute the expectations on each level; 3) we exploit the smoothness in parameter space and reuse the eigenvector from a nearby QMC point to reduce the number of iterations of the eigensolver; and 4) we utilise a two-grid discretisation scheme to obtain the eigenvalue on the fine mesh with a single linear solve. The full error analysis of a basic MLQMC algorithm is given in the companion paper [Gilbert and Scheichl, 2021], and so in this paper we focus on how to further improve the efficiency and provide theoretical justification of the enhancement strategies 3) and 4). Numerical results are presented that show the efficiency of our algorithm, and also show that the four strategies we employ are complementary.
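Strategies 1) and 2) rest on the standard multilevel telescoping identity E[lam_L] = E[lam_0] + sum_l E[lam_l - lam_{l-1}]. The sketch below shows only that skeleton for a hypothetical 1D toy eigenvalue problem, with plain Monte Carlo in place of QMC points; the coefficient model, sample numbers, and discretization are invented for illustration, and the eigenvector-reuse and two-grid enhancements (strategies 3 and 4) are omitted.

import numpy as np

rng = np.random.default_rng(1)
S = 10    # truncation dimension of the made-up random coefficient

def coeff(x, y):
    # Hypothetical uniformly elliptic random diffusion coefficient a(x, y);
    # not the coefficient model used in the paper.
    j = np.arange(1, len(y) + 1)[:, None]
    return 1.0 + 0.5 * np.sum(y[:, None] * np.sin(np.pi * j * x) / j**2, axis=0)

def smallest_eig(y, n):
    # Smallest eigenvalue of a standard finite-difference discretization of
    # -(a(x, y) u'(x))' = lambda u on (0, 1) with n interior grid points.
    h = 1.0 / (n + 1)
    a = coeff((np.arange(n + 1) + 0.5) * h, y)    # coefficient at cell midpoints
    A = (np.diag(a[:-1] + a[1:]) - np.diag(a[1:-1], 1) - np.diag(a[1:-1], -1)) / h**2
    return np.linalg.eigvalsh(A)[0]

def ml_expectation(levels=4, n0=8, samples=(512, 256, 128, 64)):
    # Multilevel telescoping sum over a hierarchy of grids; plain Monte Carlo
    # is used on each level here in place of randomly shifted QMC points.
    total = 0.0
    for level in range(levels):
        n_fine = n0 * 2**level
        acc = 0.0
        for _ in range(samples[level]):
            y = rng.uniform(-1.0, 1.0, size=S)
            lam = smallest_eig(y, n_fine)
            lam -= smallest_eig(y, n_fine // 2) if level > 0 else 0.0
            acc += lam
        total += acc / samples[level]
    return total

if __name__ == "__main__":
    print("multilevel estimate of E[lambda_min]:", ml_expectation())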
Reaction networks are often used to model interacting species in fields such as biochemistry and ecology. When the counts of the species are sufficiently large, the dynamics of their concentrations are typically modeled via a system of differential equations. However, when the counts of some species are small, the dynamics of the counts are typically modeled stochastically via a discrete-state, continuous-time Markov chain. A key quantity of interest for such models is the probability mass function of the process at some fixed time. Since paths of such models are relatively straightforward to simulate, we can estimate the probabilities by constructing an empirical distribution. However, the support of the distribution is often diffuse across a high-dimensional state space, where the dimension is equal to the number of species. Therefore generating an accurate empirical distribution can come with a large computational cost. We present a new Monte Carlo estimator that fundamentally improves on the classical Monte Carlo estimator described above. It also preserves much of classical Monte Carlo's simplicity. The idea is basically one of conditional Monte Carlo. Our conditional Monte Carlo estimator has two parameters, and their choice critically affects the performance of the algorithm. Hence, a key contribution of the present work is that we demonstrate how to approximate optimal values for these parameters in an efficient manner. Moreover, we provide a central limit theorem for our estimator, which leads to approximate confidence intervals for its error.
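The classical baseline described in this abstract (simulate many independent paths, then build an empirical distribution of the state at the fixed time) can be sketched for a simple birth-death network as below; the reaction rates are arbitrary, and the paper's conditional Monte Carlo estimator and its two tuning parameters are not reproduced here.

import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

def ssa_birth_death(k_birth, k_death, x0, T):
    # Gillespie simulation of the toy network  0 -> X (rate k_birth) and
    # X -> 0 (rate k_death * x), returning the copy number at time T.
    t, x = 0.0, x0
    while True:
        rates = (k_birth, k_death * x)
        total = rates[0] + rates[1]
        t += rng.exponential(1.0 / total)   # exponential waiting time
        if t > T:
            return x
        x += 1 if rng.uniform() * total < rates[0] else -1

def empirical_pmf(n_paths, **kwargs):
    # Classical Monte Carlo estimate of the probability mass function at time T:
    # simulate many independent paths and count how often each state occurs.
    counts = Counter(ssa_birth_death(**kwargs) for _ in range(n_paths))
    return {state: c / n_paths for state, c in sorted(counts.items())}

if __name__ == "__main__":
    pmf = empirical_pmf(20000, k_birth=10.0, k_death=1.0, x0=0, T=5.0)
    for state, prob in pmf.items():
        print(state, round(prob, 4))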