
Multilevel Hierarchical Decomposition of Finite Element White Noise with Application to Multilevel Markov Chain Monte Carlo

Published by Hillary Fairbanks
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





In this work we develop a new hierarchical multilevel approach to generate Gaussian random field realizations in an algorithmically scalable manner that is well suited to incorporation into multilevel Markov chain Monte Carlo (MCMC) algorithms. This approach builds on other partial differential equation (PDE) approaches for generating Gaussian random field realizations; in particular, a single field realization may be formed by solving a reaction-diffusion PDE with a spatial white noise source function as the right-hand side. While these approaches have been explored to accelerate forward uncertainty quantification tasks, e.g., multilevel Monte Carlo, the previous constructions are not directly applicable to multilevel MCMC frameworks, which build fine-scale random fields in a hierarchical fashion from coarse-scale random fields. Our new hierarchical multilevel method relies on a hierarchical decomposition of the white noise source function in $L^2$, which allows us to form Gaussian random field realizations across multiple levels of discretization in a way that fits into multilevel MCMC algorithmic frameworks. After presenting our main theoretical results and numerical scaling results that showcase the utility of this new hierarchical PDE method for generating Gaussian random field realizations, we test the method on a four-level MCMC algorithm to explore its feasibility.
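For intuition, the sketch below shows one way such a PDE-based sampler can look in practice: a single realization is obtained by assembling a reaction-diffusion operator and solving it against a finite element white noise right-hand side. The 1D piecewise-linear discretization, the lumped mass matrix, and the parameter names (n_elems, kappa) are illustrative assumptions, not the construction or the hierarchical $L^2$ decomposition developed in the paper.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def sample_grf_1d(n_elems=256, kappa=10.0, rng=None):
    # One field realization on [0, 1] using n_elems piecewise-linear (P1) elements.
    rng = np.random.default_rng() if rng is None else rng
    h = 1.0 / n_elems
    n = n_elems + 1                                   # number of nodes

    # P1 stiffness matrix K (natural boundary conditions) and lumped mass matrix M.
    main = np.full(n, 2.0 / h); main[[0, -1]] = 1.0 / h
    off = -np.ones(n - 1) / h
    K = sp.diags([main, off, off], [0, -1, 1])
    m_lumped = np.full(n, h); m_lumped[[0, -1]] = h / 2.0
    M = sp.diags(m_lumped)

    # Finite element white noise: b ~ N(0, M), realized as b = M^{1/2} xi with M lumped.
    b = np.sqrt(m_lumped) * rng.standard_normal(n)

    # Reaction-diffusion solve (kappa^2 M + K) u = b yields one Gaussian field realization.
    A = ((kappa**2) * M + K).tocsc()
    return spla.spsolve(A, b)

u = sample_grf_1d()
print(u.shape, u.std())

In SPDE-based samplers of this general type, a larger kappa typically corresponds to a shorter correlation length of the resulting field; the multilevel construction in the paper repeats such solves across a hierarchy of discretizations with consistently decomposed noise.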


Read also

We consider the problem of estimating the probability of a large loss from a financial portfolio, where the future loss is expressed as a conditional expectation. Since the conditional expectation is intractable in most cases, one may resort to nested simulation. To reduce the complexity of nested simulation, we present a method that combines multilevel Monte Carlo (MLMC) and quasi-Monte Carlo (QMC). In the outer simulation, we use Monte Carlo to generate financial scenarios. In the inner simulation, we use QMC to estimate the portfolio loss in each scenario. We prove that using QMC can accelerate the convergence rates in both the crude nested simulation and the multilevel nested simulation. Under certain conditions, the complexity of MLMC can be reduced to $O(\epsilon^{-2}(\log \epsilon)^2)$ by incorporating QMC. On the other hand, we find that MLMC encounters a catastrophic coupling problem due to the presence of indicator functions. To remedy this, we propose a smoothed MLMC method which uses logistic sigmoid functions to approximate the indicator functions. Numerical results show that the optimal complexity $O(\epsilon^{-2})$ is almost attained when using QMC methods in both MLMC and smoothed MLMC, even in moderately high dimensions.
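The smoothing step can be illustrated with a minimal sketch, assuming a toy nested-simulation model: the hard indicator 1{loss > threshold} is replaced by a logistic sigmoid whose sharpness parameter trades smoothness against bias. The model and parameter names below are illustrative, not the estimator analyzed in the paper.

import numpy as np

def smoothed_indicator(loss, threshold, sharpness=50.0):
    # Logistic approximation of 1{loss > threshold}; larger sharpness is closer to
    # the hard indicator but gives a steeper (less smooth) function near the threshold.
    return 1.0 / (1.0 + np.exp(-sharpness * (loss - threshold)))

rng = np.random.default_rng(0)
n_outer, n_inner, threshold = 10_000, 64, 1.0

scenarios = rng.standard_normal(n_outer)                  # outer stage: risk-factor scenarios
inner = rng.standard_normal((n_outer, n_inner))           # inner stage: future shocks per scenario
losses = (scenarios[:, None] + 0.5 * inner).mean(axis=1)  # toy conditional-expectation estimate

p_hard = (losses > threshold).mean()                      # crude indicator-based estimate
p_smooth = smoothed_indicator(losses, threshold).mean()   # smoothed estimate
print(p_hard, p_smooth)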
We propose a novel $hp$-multilevel Monte Carlo method for the quantification of uncertainties in the compressible Navier-Stokes equations, using the Discontinuous Galerkin method as the deterministic solver. The multilevel approach exploits hierarchies of uniformly refined meshes while simultaneously increasing the polynomial degree of the ansatz space. It allows for a very large range of resolutions in the physical space and thus an efficient decrease of the statistical error. We prove that the overall complexity of the $hp$-multilevel Monte Carlo method to compute the mean field with prescribed accuracy is, in the best case, of quadratic order with respect to the accuracy. We also propose a novel and simple approach to estimate a lower confidence bound for the optimal number of samples per level, which helps to prevent overestimating these quantities. The method is in particular designed for application on queue-based computing systems, where it is desirable to compute a large number of samples during one iteration without overestimating the optimal number of samples. Our theoretical results are verified by numerical experiments for the two-dimensional compressible Navier-Stokes equations. In particular, we consider a cavity flow problem from computational acoustics, demonstrating that the method is suitable for handling complex engineering problems.
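For context, the multilevel methods collected here build on the standard multilevel Monte Carlo telescoping estimator; the sample-allocation rule below is the classical one from MLMC theory, stated as background rather than as a result of this paper:

$$\widehat{Q}^{\mathrm{ML}} = \sum_{\ell=0}^{L} \frac{1}{N_\ell} \sum_{i=1}^{N_\ell} \bigl( Q_\ell^{(i)} - Q_{\ell-1}^{(i)} \bigr), \qquad Q_{-1} \equiv 0, \qquad N_\ell \propto \sqrt{V_\ell / C_\ell},$$

where $V_\ell$ and $C_\ell$ denote the variance and cost of a single level-$\ell$ correction sample; the lower confidence bound proposed in the paper targets precisely these $N_\ell$, to avoid overestimating them.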
We present a novel approach aimed at high-performance uncertainty quantification for time-dependent problems governed by partial differential equations. In particular, we consider input uncertainties described by a Karhunen-Loève expansion and compute statistics of high-dimensional quantities of interest, such as the cardiac activation potential. Our methodology relies on a close integration of multilevel Monte Carlo methods, parallel iterative solvers, and a space-time discretization. This combination allows for space-time adaptivity and time-changing domains, and makes it possible to use past samples to initialize the space-time solution. The resulting sequence of problems is distributed using a multilevel parallelization strategy, allocating batches of samples of different sizes to different numbers of processors. We assess the performance of the proposed framework by showing in detail its application to the solution of nonlinear equations arising from cardiac electrophysiology. Specifically, we study the effect of spatially correlated perturbations of the heart fiber conductivities on the mean and variance of the resulting activation map. As shown by the experiments, the theoretical rates of convergence of multilevel Monte Carlo are achieved. Moreover, the total computational work for a prescribed accuracy is reduced by an order of magnitude with respect to standard Monte Carlo methods.
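As a minimal sketch of such an input model, a truncated Karhunen-Loève-type expansion can be drawn by keeping the leading eigenpairs of a discretized covariance kernel; the exponential kernel, correlation length, and truncation level below are illustrative assumptions, not the fiber-conductivity model used in the paper.

import numpy as np

def kl_sample(n_points=200, corr_len=0.2, n_modes=20, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x = np.linspace(0.0, 1.0, n_points)

    # Discretized exponential covariance C(x_i, x_j) = exp(-|x_i - x_j| / corr_len).
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

    # Truncated spectral (discrete Karhunen-Loeve) expansion: keep the n_modes largest
    # eigenpairs and weight independent standard normal variables by sqrt(lambda_k).
    evals, evecs = np.linalg.eigh(C)
    idx = np.argsort(evals)[::-1][:n_modes]
    evals, evecs = evals[idx], evecs[:, idx]
    xi = rng.standard_normal(n_modes)
    return x, evecs @ (np.sqrt(np.maximum(evals, 0.0)) * xi)

x, field = kl_sample()
print(field.min(), field.max())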
Ajay Jasra, Kody J. H. Law, 2021
This position paper summarizes a recently developed research program focused on inference in the context of data-centric science and engineering applications, and forecasts its trajectory forward over the next decade. Often one endeavours in this context to learn complex systems in order to make more informed predictions and high-stakes decisions under uncertainty. Some key challenges which must be met in this context are robustness, generalizability, and interpretability. The Bayesian framework addresses these three challenges, while bringing with it a fourth, undesirable feature: it is typically far more expensive than its deterministic counterparts. In the 21st century, and increasingly over the past decade, a growing number of methods have emerged which allow one to leverage cheap low-fidelity models in order to precondition algorithms for performing inference with more expensive models, and make Bayesian inference tractable in the context of high-dimensional and expensive models. Notable examples are multilevel Monte Carlo (MLMC), multi-index Monte Carlo (MIMC), and their randomized counterparts (rMLMC), which are able to provably achieve a dimension-independent (including $\infty$-dimensional) canonical complexity rate of $1/\mathrm{MSE}$ with respect to mean squared error (MSE). Some parallelizability is typically lost in an inference context, but recently this has been largely recovered via novel double randomization approaches. Such an approach delivers i.i.d. samples of quantities of interest which are unbiased with respect to the infinite-resolution target distribution. Over the coming decade, this family of algorithms has the potential to transform data-centric science and engineering, as well as classical machine learning applications such as deep learning, by scaling up and scaling out fully Bayesian inference.
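A minimal sketch of the randomization idea behind rMLMC-type estimators, under toy assumptions: a level is drawn at random, its coupled level correction is computed, and the result is reweighted by the inverse level probability, so the level corrections are averaged out correctly in expectation. The geometric level distribution, the truncation at max_level, and the toy "solver" below are illustrative, not the algorithms surveyed in the paper.

import numpy as np

rng = np.random.default_rng(1)

def level_correction(level, rng):
    # Toy coupled pair: the level-l "solver" is the mean of 2**(l+2) common random
    # samples and the coarse solver reuses the first half of them, so the correction
    # has mean 1.0 (the true value) for l = 0 and mean zero for l > 0.
    z = rng.standard_normal(2 ** (level + 2)) + 1.0
    if level == 0:
        return z.mean()
    return z.mean() - z[: z.size // 2].mean()

# Geometric distribution over levels, truncated at max_level for this sketch.
max_level, decay = 12, 0.6
p = decay ** np.arange(max_level + 1)
p /= p.sum()

def rmlmc_single_sample(rng):
    # Draw one level, compute its correction, and reweight by 1 / p_l.
    l = rng.choice(max_level + 1, p=p)
    return level_correction(l, rng) / p[l]

estimates = np.array([rmlmc_single_sample(rng) for _ in range(20_000)])
print(estimates.mean())   # close to the true value 1.0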
Stochastic PDE eigenvalue problems often arise in the field of uncertainty quantification, whereby one seeks to quantify the uncertainty in an eigenvalue or its eigenfunction. In this paper we present an efficient multilevel quasi-Monte Carlo (MLQMC) algorithm for computing the expectation of the smallest eigenvalue of an elliptic eigenvalue problem with stochastic coefficients. Each sample evaluation requires the solution of a PDE eigenvalue problem, so tackling this problem in practice is notoriously computationally difficult. We speed up the approximation of this expectation in four ways: 1) we use a multilevel variance reduction scheme to spread the work over a hierarchy of FE meshes and truncation dimensions; 2) we use QMC methods to efficiently compute the expectations on each level; 3) we exploit the smoothness in parameter space and reuse the eigenvector from a nearby QMC point to reduce the number of iterations of the eigensolver; and 4) we utilise a two-grid discretisation scheme to obtain the eigenvalue on the fine mesh with a single linear solve. The full error analysis of a basic MLQMC algorithm is given in the companion paper [Gilbert and Scheichl, 2021], so in this paper we focus on how to further improve the efficiency and provide theoretical justification for enhancement strategies 3) and 4). Numerical results are presented that show the efficiency of our algorithm, and also show that the four strategies we employ are complementary.
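Enhancement 3) can be sketched generically: when moving to a nearby sample point, the previously computed eigenvector is handed to the iterative eigensolver as its starting vector. The toy operator (a 1D Laplacian plus a random diagonal potential) and the use of SciPy's ARPACK wrapper below are illustrative assumptions, not the discretization or solver of the paper.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(2)
n = 500
lap = sp.diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1])

def smallest_eig(potential, v_start=None):
    # Smallest eigenpair of A = (1D Laplacian) + diag(potential), a toy stand-in for
    # one sample of a stochastic elliptic eigenvalue problem; shift-invert about 0.
    A = (lap + sp.diags(potential)).tocsc()
    vals, vecs = eigsh(A, k=1, sigma=0.0, v0=v_start)
    return vals[0], vecs[:, 0]

# First sample point: cold start.
pot0 = 1.0 + 0.5 * rng.random(n)
lam0, v0 = smallest_eig(pot0)

# Nearby sample point: warm start from the previous eigenvector, which typically
# reduces the eigensolver iterations because the eigenvector changes only slightly.
pot1 = pot0 + 0.01 * rng.random(n)
lam1, v1 = smallest_eig(pot1, v_start=v0)
print(lam0, lam1)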