
Parallel optimized sampling for stochastic equations

Published by Bogdan Opanchuk
Publication date: 2015
Paper language: English





Stochastic equations play an important role in computational science, due to their ability to treat a wide variety of complex statistical problems. However, current algorithms are strongly limited by their sampling variance, which scales in proportion to 1/N_S for N_S samples. In this paper, we obtain a new class of variance reduction methods for treating stochastic equations, called parallel optimized sampling. The objective of parallel optimized sampling is to reduce the sampling variance in the observables of an ensemble of stochastic trajectories. This is achieved by calculating a finite set of observables - typically statistical moments - in parallel, and minimizing the errors compared to known values. The algorithm is both numerically efficient and unbiased. Importantly, it does not increase the errors in higher-order moments, and generally reduces such errors as well. The same procedure is applied both to initial ensembles and to changes in a finite time-step. Results of these methods show that errors in initially optimized moments can be reduced to the machine precision level, typically around 10^(-16) on current hardware. For nonlinear stochastic equations, sampled moment errors during time-evolution are larger than this, due to error propagation effects. Even so, we provide evidence for error reductions of up to two orders of magnitude in a nonlinear equation example, for low-order moments, which is a large practical benefit. The sampling variance still typically scales as 1/N_S, but with a much smaller prefactor than for standard, non-optimized methods.
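The initial-ensemble optimization described above (enforcing a finite set of known moments exactly, leaving only roundoff error) can be illustrated with a minimal toy sketch: shifting and rescaling a Gaussian ensemble so that its first two sample moments match the target values to machine precision. This is a generic moment-matching illustration under assumed targets (mean 0, variance 1), not the paper's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 10_000

# Raw Gaussian ensemble: sample moments deviate from the targets
# by statistical errors of order 1/sqrt(n_samples).
x = rng.standard_normal(n_samples)

# Enforce the first two target moments exactly:
x = x - x.mean()   # first moment -> 0
x = x / x.std()    # second moment -> 1

# Both sampled-moment errors are now at roundoff level.
print(abs(x.mean()), abs((x**2).mean() - 1.0))
```

After this correction the sampled mean and variance errors drop from O(1/sqrt(N_S)) to machine precision, mirroring the ~10^(-16) figure quoted for initially optimized moments.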




Read also

Emmanuel Audusse (2009)
In this article we are interested in the derivation of efficient domain decomposition methods for the viscous primitive equations of the ocean. We consider the rotating 3D incompressible hydrostatic Navier-Stokes equations with free surface. Performing an asymptotic analysis of the system with respect to the Rossby number, we compute an approximated Dirichlet-to-Neumann operator and build an optimized Schwarz waveform relaxation algorithm. We establish the well-posedness of this algorithm and present some numerical results to illustrate the method.
We develop in this work a numerical method for stochastic differential equations (SDEs) with weak second order accuracy based on Gaussian mixture. Unlike the conventional higher order schemes for SDEs based on Itô-Taylor expansion and iterated Itô integrals, the proposed scheme approximates the probability measure $\mu(X^{n+1} \mid X^n = x_n)$ by a mixture of Gaussians. The solution at the next time step $X^{n+1}$ is then drawn from the Gaussian mixture with complexity linear in the dimension $d$. This provides a new general strategy to construct efficient high weak order numerical schemes for SDEs.
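The two-step sampling this abstract describes - first pick a mixture component, then draw from that Gaussian - can be sketched as follows. The function name and parameters are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gaussian_mixture(weights, means, stds, n):
    """Draw n samples from a 1D Gaussian mixture.

    weights: component probabilities (must sum to 1)
    means, stds: per-component Gaussian parameters
    """
    # Step 1: choose a mixture component for each draw.
    idx = rng.choice(len(weights), size=n, p=weights)
    # Step 2: sample from the chosen component's Gaussian.
    return rng.normal(means[idx], stds[idx])

weights = np.array([0.3, 0.7])
means = np.array([-1.0, 2.0])
stds = np.array([0.5, 0.5])
draws = sample_gaussian_mixture(weights, means, stds, 100_000)
```

The cost per draw is independent of the number of components beyond the categorical selection, which is what makes drawing from a fitted mixture cheap relative to iterated-integral schemes.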
We study discrete-time simulation schemes for stochastic Volterra equations, namely the Euler and Milstein schemes, and the corresponding Multi-Level Monte-Carlo method. By using and adapting some results from Zhang [22], together with the Garsia-Rodemich-Rumsey lemma, we obtain the convergence rates of the Euler scheme and Milstein scheme under the supremum norm. We then apply these schemes to approximate the expectation of functionals of such Volterra equations by the (Multi-Level) Monte-Carlo method, and compute their complexity.
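For reference, the Euler and Milstein schemes this abstract analyzes take the following familiar form for an ordinary (non-Volterra) scalar SDE such as geometric Brownian motion dX = mu*X dt + sigma*X dW; the Volterra case adds a memory kernel that this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(1)

def euler_milstein_gbm(x0, mu, sigma, T, n_steps, n_paths):
    """Simulate dX = mu*X dt + sigma*X dW with both schemes in parallel."""
    dt = T / n_steps
    xe = np.full(n_paths, x0, dtype=float)  # Euler-Maruyama paths
    xm = np.full(n_paths, x0, dtype=float)  # Milstein paths
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        xe += mu * xe * dt + sigma * xe * dw
        # Milstein adds the 0.5 * b * b' * (dW^2 - dt) correction;
        # for b(X) = sigma*X this is 0.5 * sigma^2 * X * (dW^2 - dt).
        xm += mu * xm * dt + sigma * xm * dw \
              + 0.5 * sigma**2 * xm * (dw**2 - dt)
    return xe, xm

xe, xm = euler_milstein_gbm(1.0, 0.05, 0.2, 1.0, 100, 200_000)
```

The Milstein correction raises the strong convergence order from 1/2 to 1 for this SDE; for Volterra equations the achievable rates are the subject of the paper above.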
This article presents explicit exponential integrators for stochastic Maxwell's equations driven by both multiplicative and additive noises. By utilizing the regularity estimate of the mild solution, we first prove that the strong order of the numerical approximation is $\frac{1}{2}$ for general multiplicative noise. Combining a proper decomposition with the stochastic Fubini theorem, the strong order of the proposed scheme is shown to be $1$ for additive noise. Moreover, for the linear stochastic Maxwell's equation with additive noise, the proposed time integrator is shown to exactly preserve the symplectic structure, the evolution of the energy, as well as the evolution of the divergence in the sense of expectation. Several numerical experiments are presented in order to verify our theoretical findings.
Yaxian Xu, Ajay Jasra (2018)
In this paper we consider sequential joint state and static parameter estimation given discrete time observations associated to a partially observed stochastic partial differential equation (SPDE). It is assumed that one can only estimate the hidden state using a discretization of the model. In this context, it is known that the multi-index Monte Carlo (MIMC) method of [11] can be used to improve over direct Monte Carlo from the most precise discretization. However, in the context of interest, it cannot be directly applied, but rather must be used within another advanced method such as sequential Monte Carlo (SMC). We show how one can use the MIMC method by renormalizing the MI identity and approximating the resulting identity using the SMC$^2$ method of [5]. We prove that our approach can reduce the cost to obtain a given mean square error (MSE), relative to just using SMC$^2$ on the most precise discretization. We demonstrate this with some numerical examples.