
Analysis of a Helmholtz preconditioning problem motivated by uncertainty quantification

Added by Euan Spence
Publication date: 2020
Language: English





This paper analyses the following question: let $\mathbf{A}_j$, $j=1,2,$ be the Galerkin matrices corresponding to finite-element discretisations of the exterior Dirichlet problem for the heterogeneous Helmholtz equations $\nabla \cdot (A_j \nabla u_j) + k^2 n_j u_j = -f$. How small must $\|A_1 - A_2\|_{L^q}$ and $\|n_1 - n_2\|_{L^q}$ be (in terms of $k$-dependence) for GMRES applied to either $(\mathbf{A}_1)^{-1}\mathbf{A}_2$ or $\mathbf{A}_2(\mathbf{A}_1)^{-1}$ to converge in a $k$-independent number of iterations for arbitrarily large $k$? (In other words, for $\mathbf{A}_1$ to be a good left- or right-preconditioner for $\mathbf{A}_2$?) We prove results answering this question, give theoretical evidence for their sharpness, and give numerical experiments supporting the estimates. Our motivation for tackling this question comes from calculating quantities of interest for the Helmholtz equation with random coefficients $A$ and $n$. Such a calculation may require the solution of many deterministic Helmholtz problems, each with different $A$ and $n$, and the answer to the question above dictates to what extent a previously calculated inverse of one of the Galerkin matrices can be used as a preconditioner for other Galerkin matrices.
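The reuse of one factorisation across nearby problems can be illustrated with a minimal sketch. This is a 1D toy model, not the paper's exterior Dirichlet discretisation: we build two "nearby" Helmholtz-type matrices $\mathbf{A}_1$, $\mathbf{A}_2$ (the perturbation size and mesh are illustrative assumptions), factorise $\mathbf{A}_1$ once, and reuse it as a left preconditioner for GMRES applied to $\mathbf{A}_2$.

```python
# Illustrative sketch only: two "nearby" 1D Helmholtz-type Galerkin
# matrices; a factorisation of A1 is reused to precondition A2.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, k = 200, 20.0
h = 1.0 / (n + 1)
lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2

# A1: finite-difference matrix for -u'' - k^2 u with Dirichlet conditions;
# A2: the same operator with a small random perturbation of the coefficient n.
rng = np.random.default_rng(0)
A1 = (lap - k**2 * sp.eye(n)).tocsc()
A2 = (A1 - k**2 * sp.diags(1e-4 * rng.standard_normal(n))).tocsc()

solve_A1 = spla.splu(A1)                          # factorise A1 once
M = spla.LinearOperator((n, n), solve_A1.solve)   # acts as A1^{-1}

residuals = []
b = np.ones(n)
x, info = spla.gmres(A2, b, M=M, atol=1e-10,
                     callback=lambda r: residuals.append(r),
                     callback_type="pr_norm")
print(info, len(residuals))  # info == 0; very few iterations needed
```

Because the coefficient perturbation is tiny here, the preconditioned matrix $(\mathbf{A}_1)^{-1}\mathbf{A}_2$ is close to the identity and GMRES converges in a handful of iterations; the paper's question is how large the perturbation may be, as a function of $k$, before this behaviour is lost.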




Read More

Classical a posteriori error analysis for differential equations quantifies the error in a Quantity of Interest (QoI) which is represented as a bounded linear functional of the solution. In this work we consider a posteriori error estimates of a quantity of interest that cannot be represented in this fashion, namely the time at which a threshold is crossed for the first time. We derive two representations for such errors and use an adjoint-based a posteriori approach to estimate unknown terms that appear in our representation. The first representation is based on linearizations using Taylor's theorem. The second representation is obtained by applying standard root-finding techniques. We provide several examples which demonstrate the accuracy of the methods. We then embed these error estimates within a framework to provide error bounds on a cumulative distribution function when parameters of the differential equations are uncertain.
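The root-finding view of the first-crossing QoI can be sketched on a toy problem (the ODE, threshold, and tolerances below are illustrative assumptions, not the paper's examples): solve $u' = u$, $u(0) = 0.5$, and locate the first time $u(t)$ reaches the threshold $1$, which is exactly $t = \ln 2$.

```python
# Hypothetical sketch: the QoI is the first time the solution crosses
# a threshold; we root-find on the solver's dense interpolant.
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

threshold = 1.0
sol = solve_ivp(lambda t, u: u, (0.0, 2.0), [0.5],
                dense_output=True, rtol=1e-10, atol=1e-12)

# First t with u(t) = threshold; u is increasing, so the bracket is valid.
t_star = brentq(lambda t: sol.sol(t)[0] - threshold, 0.0, 2.0)
print(t_star)  # close to ln 2 ≈ 0.6931
```

In the paper's setting the numerical solution carries discretisation error, so the computed crossing time inherits an error that the adjoint-based estimates are designed to quantify.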
In this paper, based on a domain decomposition (DD) method, we shall propose an efficient two-level preconditioned Helmholtz-Jacobi-Davidson (PHJD) method for solving the algebraic eigenvalue problem resulting from the edge element approximation of the Maxwell eigenvalue problem. In order to eliminate the components in the orthogonal complement space of the eigenvalue, we shall solve a parallel preconditioned system and a Helmholtz projection system together in the fine space. After one coarse space correction in each iteration and minimizing the Rayleigh quotient in a small-dimensional Davidson space, we finally obtain the error reduction of this two-level PHJD method as $\gamma = c(H)\left(1 - C\frac{\delta^{2}}{H^{2}}\right)$, where $C$ is a constant independent of the mesh size $h$ and the diameter of subdomains $H$, $\delta$ is the overlapping size among the subdomains, and $c(H)$ decreases as $H \to 0$, which means the greater the number of subdomains, the better the convergence rate. Numerical results supporting our theory shall be given.
We propose a novel $hp$-multilevel Monte Carlo method for the quantification of uncertainties in the compressible Navier-Stokes equations, using the Discontinuous Galerkin method as deterministic solver. The multilevel approach exploits hierarchies of uniformly refined meshes while simultaneously increasing the polynomial degree of the ansatz space. It allows for a very large range of resolutions in the physical space and thus an efficient decrease of the statistical error. We prove that the overall complexity of the $hp$-multilevel Monte Carlo method to compute the mean field with prescribed accuracy is, in best-case, of quadratic order with respect to the accuracy. We also propose a novel and simple approach to estimate a lower confidence bound for the optimal number of samples per level, which helps to prevent overestimating these quantities. The method is in particular designed for application on queue-based computing systems, where it is desirable to compute a large number of samples during one iteration, without overestimating the optimal number of samples. Our theoretical results are verified by numerical experiments for the two-dimensional compressible Navier-Stokes equations. In particular we consider a cavity flow problem from computational acoustics, demonstrating that the method is suitable to handle complex engineering problems.
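The multilevel Monte Carlo idea behind this abstract can be sketched on a toy model problem (the model, sample counts, and bias rate below are illustrative assumptions, not the paper's $hp$-DG Navier-Stokes solver): write $\mathbb{E}[P_L] = \mathbb{E}[P_0] + \sum_{l=1}^{L} \mathbb{E}[P_l - P_{l-1}]$ and sample each correction with correlated coarse/fine evaluations, so most samples land on the cheap coarse level.

```python
# Toy MLMC sketch: the level-l approximation of P = sin(X), X ~ U(0,1),
# carries an artificial O(2^{-2l}) bias standing in for discretisation error.
import numpy as np

rng = np.random.default_rng(42)

def P(x, level):
    return np.sin(x) + 2.0**(-2 * level) * np.cos(x)

def level_estimator(level, n_samples):
    x = rng.random(n_samples)  # same X for coarse and fine: correlated pair
    if level == 0:
        return P(x, 0).mean()
    return (P(x, level) - P(x, level - 1)).mean()

L = 5
# Geometrically decaying sample counts: corrections have small variance.
n_per_level = [100_000 // 4**l + 100 for l in range(L + 1)]
estimate = sum(level_estimator(l, n) for l, n in enumerate(n_per_level))

exact = 1.0 - np.cos(1.0)  # E[sin X] for X ~ U(0,1)
print(estimate, exact)
```

Because the correction terms $P_l - P_{l-1}$ shrink with $l$, only a few fine-level samples are needed; the paper's contribution includes refining both $h$ and $p$ across levels and estimating a lower confidence bound on the optimal sample counts.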
In this work, we describe a Bayesian framework for the X-ray computed tomography (CT) problem in an infinite-dimensional setting. We consider reconstructing piecewise smooth fields with discontinuities where the interface between regions is not known, and we quantify the uncertainty in the prediction. Directly detecting the discontinuities, instead of reconstructing the entire image, drastically reduces the dimension of the problem, so the posterior distribution can be approximated with a relatively small number of samples. We show that our method provides an excellent platform for challenging X-ray CT scenarios (e.g. noisy data, limited-angle, or sparse-angle imaging). We investigate the accuracy and the efficiency of our method on synthetic data. Furthermore, we apply the method to real-world data: tomographic X-ray data of a lotus root filled with attenuating objects. The numerical results indicate that our method accurately detects boundaries between piecewise smooth regions and quantifies the uncertainty in the prediction in the context of X-ray CT.
Motivated by the desire to numerically calculate rigorous upper and lower bounds on deviation probabilities over large classes of probability distributions, we present an adaptive algorithm for the reconstruction of increasing real-valued functions. While this problem is similar to the classical statistical problem of isotonic regression, the optimisation setting alters several characteristics of the problem and opens natural algorithmic possibilities. We present our algorithm, establish sufficient conditions for convergence of the reconstruction to the ground truth, and apply the method to synthetic test cases and a real-world example of uncertainty quantification for aerodynamic design.
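As background for the comparison this abstract draws, classical isotonic regression can be solved by the pool-adjacent-violators algorithm (PAVA). The sketch below is the textbook algorithm, not the paper's adaptive optimisation-based reconstruction; the function name and example data are illustrative.

```python
# Pool-adjacent-violators: least-squares fit of a nondecreasing step
# function to a sequence of observations.
def pava(y):
    """Return the nondecreasing least-squares fit to the sequence y."""
    # Each block stores [sum, count]; merge while the last two blocks
    # violate monotonicity of their means.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while (len(blocks) > 1 and
               blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]):
            total, count = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += count
    fit = []
    for total, count in blocks:
        fit.extend([total / count] * count)
    return fit

print(pava([1.0, 3.0, 2.0, 4.0]))  # → [1.0, 2.5, 2.5, 4.0]
```

The paper's setting differs in that the monotone function is reconstructed adaptively within an optimisation problem for bounding deviation probabilities, rather than fitted to fixed noisy observations.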