Stochastic equations play an important role in computational science, due to their ability to treat a wide variety of complex statistical problems. However, current algorithms are strongly limited by their sampling variance, which scales proportionally to 1/N_S for N_S samples. In this paper, we obtain a new class of variance reduction methods for treating stochastic equations, called parallel optimized sampling. The objective of parallel optimized sampling is to reduce the sampling variance in the observables of an ensemble of stochastic trajectories. This is achieved by calculating a finite set of observables - typically statistical moments - in parallel, and minimizing their errors relative to known values. The algorithm is both numerically efficient and unbiased. Importantly, it does not increase the errors in higher-order moments, and generally reduces such errors as well. The same procedure is applied both to initial ensembles and to changes over a finite time-step. Results of these methods show that errors in initially optimized moments can be reduced to the machine precision level, typically around 10^(-16) on current hardware. For nonlinear stochastic equations, sampled moment errors during time evolution are larger than this, due to error propagation effects. Even so, we provide evidence of error reductions of up to two orders of magnitude for low-order moments in a nonlinear example, which is a large practical benefit. The sampling variance still scales as 1/N_S, but with a very much smaller prefactor than for standard, non-optimized methods.
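
To make the idea concrete, here is a minimal Python sketch, under simplifying assumptions: it enforces only the first two moments of a Gaussian ensemble by an explicit shift and rescale, and uses a hypothetical Ornstein-Uhlenbeck test equation dx = -x dt + dW for the time-step case. This is an illustration of moment-optimized sampling, not the paper's full parallel algorithm.

import numpy as np

def optimize_moments(w, mean=0.0, var=1.0):
    """Shift and rescale samples w so that the sample mean and variance
    match the known target values to machine precision."""
    w = w - w.mean() + mean          # enforce the first moment exactly
    s = np.sqrt(var / w.var())       # rescale the fluctuations
    return mean + (w - mean) * s     # enforce the second moment exactly

rng = np.random.default_rng(seed=0)
N_S = 10_000
w_raw = rng.normal(size=N_S)         # plain sampling: O(1/sqrt(N_S)) moment errors
w_opt = optimize_moments(w_raw)      # optimized initial ensemble

print(abs(w_raw.mean()), abs(w_raw.var() - 1.0))   # ~1e-2: ordinary sampling error
print(abs(w_opt.mean()), abs(w_opt.var() - 1.0))   # ~1e-16: machine precision

# The same optimization applied per time-step: each Wiener increment
# ensemble is re-optimized before an Euler-Maruyama update.
dt, steps = 0.01, 100
x = optimize_moments(rng.normal(size=N_S))
for _ in range(steps):
    dW = optimize_moments(rng.normal(size=N_S) * np.sqrt(dt), var=dt)
    x += -x * dt + dW
print(abs(x.mean()))                 # mean error remains near machine precision

In the paper's method this shift-and-rescale step is generalized: a finite set of observables is computed in parallel and their errors relative to known values are minimized, in a way that remains unbiased and does not inflate the errors of higher-order moments.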