In this work, we describe a Bayesian framework for the X-ray computed tomography (CT) problem in an infinite-dimensional setting. We consider reconstructing piecewise smooth fields with discontinuities, where the interface between regions is not known, and we quantify the uncertainty in the reconstruction. Directly detecting the discontinuities, instead of reconstructing the entire image, drastically reduces the dimension of the problem, so the posterior distribution can be approximated with a relatively small number of samples. We show that our method provides an excellent platform for challenging X-ray CT scenarios (e.g. noisy data, limited-angle, or sparse-angle imaging). We investigate the accuracy and efficiency of our method on synthetic data, and we also apply it to real-world tomographic X-ray data of a lotus root filled with attenuating objects. The numerical results indicate that, in the context of X-ray CT, our method accurately detects boundaries between piecewise smooth regions and quantifies the uncertainty in the prediction.
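As an illustrative aside, not taken from the paper: the dimension-reduction idea (sampling a low-dimensional parametrisation of the discontinuity instead of the full image) can be sketched on a toy problem. Everything below is hypothetical: a single disc with unknown centre and radius, row and column sums standing in for X-ray projections, and a plain random-walk Metropolis sampler.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
xx, yy = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))

def forward(cx, cy, r):
    """Crude stand-in for a CT forward operator: a binary disc image,
    with its column and row sums playing the role of projections."""
    img = ((xx - cx) ** 2 + (yy - cy) ** 2 < r ** 2).astype(float)
    return np.concatenate([img.sum(axis=0), img.sum(axis=1)]) / n

truth = (0.5, 0.45, 0.2)
sigma = 0.02
data = forward(*truth) + sigma * rng.normal(size=2 * n)

def log_post(theta):
    cx, cy, r = theta
    if not (0.1 < cx < 0.9 and 0.1 < cy < 0.9 and 0.05 < r < 0.4):
        return -np.inf          # uniform prior on the boundary parameters
    return -0.5 * np.sum((forward(cx, cy, r) - data) ** 2) / sigma ** 2

# random-walk Metropolis over just 3 parameters instead of n*n pixels
theta = np.array([0.5, 0.5, 0.15])
lp = log_post(theta)
chain = []
for _ in range(4000):
    prop = theta + 0.01 * rng.normal(size=3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[1000:])      # discard burn-in
r_mean = chain[:, 2].mean()
```

With only three unknowns, a few thousand samples suffice to approximate the posterior, which is the point the abstract makes about dimension reduction; the paper's actual infinite-dimensional formulation and CT forward operator are of course far richer.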
This paper analyses the following question: let $\mathbf{A}_j$, $j=1,2$, be the Galerkin matrices corresponding to finite-element discretisations of the exterior Dirichlet problem for the heterogeneous Helmholtz equations $\nabla\cdot (A_j \nabla u_j) + k^2 n_j u_j = -f$. How small must $\|A_1 - A_2\|_{L^q}$ and $\|n_1 - n_2\|_{L^q}$ be (in terms of $k$-dependence) for GMRES applied to either $(\mathbf{A}_1)^{-1}\mathbf{A}_2$ or $\mathbf{A}_2(\mathbf{A}_1)^{-1}$ to converge in a $k$-independent number of iterations for arbitrarily large $k$? (In other words, for $\mathbf{A}_1$ to be a good left- or right-preconditioner for $\mathbf{A}_2$?) We prove results answering this question, give theoretical evidence for their sharpness, and give numerical experiments supporting the estimates. Our motivation for tackling this question comes from calculating quantities of interest for the Helmholtz equation with random coefficients $A$ and $n$. Such a calculation may require the solution of many deterministic Helmholtz problems, each with different $A$ and $n$, and the answer to the question above dictates to what extent a previously calculated inverse of one of the Galerkin matrices can be used as a preconditioner for the other Galerkin matrices.
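As a toy illustration of the question, not the paper's experiments: one can discretise two nearby 1D Helmholtz-type problems by finite differences, reuse an LU factorisation of the first matrix as a left preconditioner for the second, and count GMRES iterations. The matrices, coefficients, and tolerances below are all hypothetical stand-ins for the Galerkin matrices in the abstract.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def helmholtz_1d(n_coeff, k, m):
    """Finite-difference stand-in for a Helmholtz Galerkin matrix:
    -u'' - k^2 n u on m interior points of (0, 1), Dirichlet BCs."""
    h = 1.0 / (m + 1)
    lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m)) / h**2
    return (lap - k**2 * sp.diags(n_coeff)).tocsc()

m, k = 200, 20.0
x = np.linspace(0.0, 1.0, m + 2)[1:-1]
n1 = np.ones(m)
n2 = n1 + 0.01 * np.sin(2 * np.pi * x)    # small perturbation of n

A1 = helmholtz_1d(n1, k, m)
A2 = helmholtz_1d(n2, k, m)
b = np.ones(m)

lu = spla.splu(A1)                        # "previously calculated inverse"
M = spla.LinearOperator((m, m), matvec=lu.solve)

resids = []                               # one entry per GMRES iteration
u, info = spla.gmres(A2, b, M=M, atol=1e-10, callback=resids.append)
```

Because the perturbation of $n$ is small here, the preconditioned matrix is a small perturbation of the identity and GMRES converges in a handful of iterations; the abstract's question is how small the perturbation must be, as a function of $k$, for this behaviour to persist.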
We propose a novel $hp$-multilevel Monte Carlo method for the quantification of uncertainties in the compressible Navier-Stokes equations, using the Discontinuous Galerkin method as the deterministic solver. The multilevel approach exploits hierarchies of uniformly refined meshes while simultaneously increasing the polynomial degree of the ansatz space. It allows for a very large range of resolutions in the physical space and thus an efficient decrease of the statistical error. We prove that the overall complexity of the $hp$-multilevel Monte Carlo method to compute the mean field with prescribed accuracy is, in the best case, of quadratic order with respect to the accuracy. We also propose a novel and simple approach to estimate a lower confidence bound for the optimal number of samples per level, which helps to prevent overestimating these quantities. The method is designed in particular for application on queue-based computing systems, where it is desirable to compute a large number of samples during one iteration without overestimating the optimal number of samples. Our theoretical results are verified by numerical experiments for the two-dimensional compressible Navier-Stokes equations. In particular, we consider a cavity flow problem from computational acoustics, demonstrating that the method is suitable for handling complex engineering problems.
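For orientation, the generic multilevel Monte Carlo estimator behind such methods can be sketched on a toy level hierarchy. The code below uses midpoint quadrature as a stand-in for a DG Navier-Stokes solve and applies the standard allocation $N_\ell \propto \sqrt{V_\ell / C_\ell}$; it does not reproduce the paper's confidence-bound estimate for the sample numbers, and all quantities are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def g_level(xi, l):
    """Toy level-l approximation: midpoint quadrature of
    int_0^1 exp(xi * t) dt on 2**l cells (stand-in for a PDE solve)."""
    m = 2 ** l
    t = (np.arange(m) + 0.5) / m
    return np.exp(np.outer(xi, t)).mean(axis=1)

def mlmc(L, eps):
    # pilot runs to estimate per-level correction variances V_l, costs C_l
    V, C = [], []
    for l in range(L + 1):
        xi = rng.standard_normal(100)
        Y = g_level(xi, l) - (g_level(xi, l - 1) if l > 0 else 0.0)
        V.append(Y.var())
        C.append(float(2 ** l))
    # standard optimal allocation: N_l proportional to sqrt(V_l / C_l)
    total = sum(np.sqrt(v * c) for v, c in zip(V, C))
    N = [max(1, int(np.ceil(2.0 / eps**2 * np.sqrt(v / c) * total)))
         for v, c in zip(V, C)]
    # telescoping-sum estimator of E[g]
    est = 0.0
    for l, n in enumerate(N):
        xi = rng.standard_normal(n)
        Y = g_level(xi, l) - (g_level(xi, l - 1) if l > 0 else 0.0)
        est += Y.mean()
    return est, N

est, N = mlmc(L=4, eps=0.05)
```

Most samples land on the coarse, cheap levels, while only a few corrections are computed on the fine levels; this is what drives the favourable complexity the abstract proves for the $hp$-variant.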
Motivated by the desire to numerically calculate rigorous upper and lower bounds on deviation probabilities over large classes of probability distributions, we present an adaptive algorithm for the reconstruction of increasing real-valued functions. While this problem is similar to the classical statistical problem of isotonic regression, the optimisation setting alters several characteristics of the problem and opens natural algorithmic possibilities. We present our algorithm, establish sufficient conditions for convergence of the reconstruction to the ground truth, and apply the method to synthetic test cases and a real-world example of uncertainty quantification for aerodynamic design.
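For context, the classical isotonic-regression problem that the abstract contrasts with can be solved exactly by the pool-adjacent-violators algorithm (PAVA). The sketch below is that standard algorithm, not the paper's adaptive optimisation scheme.

```python
def pava(y):
    """Pool Adjacent Violators: exact least-squares fit of a
    nondecreasing sequence to the data y."""
    out = []  # stack of blocks [mean, count]
    for v in y:
        out.append([float(v), 1])
        # merge blocks while the monotonicity constraint is violated
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, c2 = out.pop()
            m1, c1 = out.pop()
            out.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    fit = []
    for mean, cnt in out:
        fit.extend([mean] * cnt)
    return fit
```

For example, `pava([1, 3, 2, 4])` pools the violating pair `3, 2` into their mean, yielding `[1, 2.5, 2.5, 4]`.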
Classical a posteriori error analysis for differential equations quantifies the error in a Quantity of Interest (QoI) which is represented as a bounded linear functional of the solution. In this work we consider a posteriori error estimates of a quantity of interest that cannot be represented in this fashion, namely the time at which a threshold is crossed for the first time. We derive two representations for such errors and use an adjoint-based a posteriori approach to estimate unknown terms that appear in our representation. The first representation is based on linearizations using Taylor's theorem. The second representation is obtained by applying standard root-finding techniques. We provide several examples which demonstrate the accuracy of the methods. We then embed these error estimates within a framework to provide error bounds on a cumulative distribution function when parameters of the differential equations are uncertain.
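The quantity of interest itself, the first time a solution crosses a threshold, is easy to illustrate: solve an ODE with dense output and locate the first sign change of $y(t) - y_{\mathrm{thr}}$ by root finding, in the spirit of the abstract's second representation. The toy problem below is hypothetical, and the adjoint-based error estimate is not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

threshold = 2.0

def rhs(t, y):
    return y  # y' = y, y(0) = 1, so y(t) = e^t crosses 2 at t = ln 2

sol = solve_ivp(rhs, (0.0, 2.0), [1.0], dense_output=True,
                rtol=1e-10, atol=1e-12)

def g(t):
    return sol.sol(t)[0] - threshold

# bracket the FIRST sign change on the solver's own time grid,
# then refine the crossing time with a root finder
ts = sol.t
i = int(np.where(np.sign(g(ts[:-1])) != np.sign(g(ts[1:])))[0][0])
t_star = brentq(g, ts[i], ts[i + 1])
```

Scanning for the first sign change before refining matters when the solution is non-monotone, since later crossings must not be picked up; the computed `t_star` is then itself subject to the discretisation error that the paper's estimates quantify.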
This paper presents a comparison of two methods for the forward uncertainty quantification (UQ) of complex industrial problems. Specifically, the performance of Multi-Index Stochastic Collocation (MISC) and adaptive multi-fidelity Stochastic Radial Basis Functions (SRBF) surrogates is assessed for the UQ of a roll-on/roll-off passenger ferry advancing in calm water and subject to two operational uncertainties, namely the ship speed and draught. The estimation of the expected value, standard deviation, and probability density function of the (model-scale) resistance is presented and discussed; the required simulations are obtained by the in-house unsteady multi-grid Reynolds Averaged Navier-Stokes (RANS) solver $\chi$navis. Both MISC and SRBF use as multi-fidelity levels the evaluations on the different grid levels intrinsically employed by the RANS solver for multi-grid acceleration; four grid levels are used here, obtained as isotropic coarsening of the initial finest mesh. The results suggest that MISC could be preferred when only limited data sets are available. For larger data sets both MISC and SRBF represent a valid option, with a slight preference for SRBF due to its robustness to noise.
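As an illustrative aside, not the paper's solver chain: a single-fidelity radial-basis-function surrogate for a hypothetical resistance response can be built with `scipy.interpolate.RBFInterpolator` and then sampled cheaply for the mean and standard deviation of the quantity of interest. The response function, design sizes, and parameter ranges below are all made up, and the multi-fidelity and adaptivity aspects of SRBF are not reproduced.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

def resistance(x):
    """Hypothetical smooth response over normalised (speed, draught)."""
    return 1.0 + x[:, 0] ** 2 + 0.5 * np.sin(3.0 * x[:, 1])

# train an RBF surrogate on a small design of "expensive" evaluations
X_train = rng.uniform(0.0, 1.0, size=(40, 2))
surrogate = RBFInterpolator(X_train, resistance(X_train))

# cheap Monte Carlo on the surrogate for the forward-UQ statistics
X_mc = rng.uniform(0.0, 1.0, size=(20000, 2))
vals = surrogate(X_mc)
mean, std = vals.mean(), vals.std()
```

The design choice mirrored here is the one the abstract exploits: once a surrogate replaces the RANS solver, estimating moments and densities of the resistance reduces to inexpensive sampling of the surrogate.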