The inverse problem methodology is a commonly used framework in the sciences for parameter estimation and inference. It is typically performed by fitting a mathematical model to noisy experimental data. There are two significant sources of error in this process: (1) noise from the measurement and collection of experimental data, and (2) numerical error in approximating the true solution of the mathematical model. Little attention has been paid to how this second source of error alters the results of an inverse problem. As a first step towards a better understanding of this issue, we present a modeling and simulation study using a simple advection-driven PDE model. We present both analytical and computational results concerning how the different sources of error impact the least-squares cost function as well as parameter estimation and uncertainty quantification. We investigate residual patterns to derive an autocorrelative statistical model that can improve parameter estimation and confidence interval computation for first-order methods. Building on the results of our investigation, we provide guidelines for practitioners to determine whether numerical or experimental error is the main source of error in their inference, along with suggestions on how to efficiently improve their results.
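The interplay between the two error sources can be illustrated with a minimal toy example (not the paper's advection model; all names and values here are illustrative): fitting a decay rate by least squares, where the forward model is solved with forward Euler at a coarse versus a fine step, so that discretization error competes with measurement noise.

```python
# Toy sketch (assumed setup, not the paper's model): estimate k in u' = -k*u
# by least squares from noisy data, using a forward-Euler solver whose step
# size dt controls the numerical error in the forward model.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
k_true, u0 = 1.5, 1.0
t_obs = np.linspace(0.1, 2.0, 20)
# Error source 1: measurement noise on the data.
data = u0 * np.exp(-k_true * t_obs) + 0.01 * rng.standard_normal(t_obs.size)

def forward_euler(k, dt):
    """Solve u' = -k*u with forward Euler on step dt, sampled at t_obs."""
    n = int(round(t_obs[-1] / dt))
    t = np.linspace(0.0, t_obs[-1], n + 1)
    u = u0 * (1.0 - k * dt) ** np.arange(n + 1)
    return np.interp(t_obs, t, u)

def cost(k, dt):
    """Least-squares cost; error source 2 enters through dt."""
    return np.sum((forward_euler(k, dt) - data) ** 2)

estimates = {}
for dt in (0.2, 0.01):  # coarse vs fine discretization
    res = minimize_scalar(cost, bounds=(0.1, 5.0), args=(dt,),
                          method="bounded")
    estimates[dt] = res.x
    print(f"dt={dt:5.2f}  estimated k = {res.x:.4f}")
```

With the coarse step the numerical error in the forward solve biases the estimate of k noticeably; with the fine step the remaining estimation error is dominated by the measurement noise.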
We consider a size-structured model for cell division and address the question of determining the division (birth) rate from the measured stable size distribution of the population. We propose a new regularization technique based on a filtering approach. We prove convergence of the algorithm and validate the theoretical results with numerical simulations based on classical techniques. We compare the results for the direct and inverse problems, for the filtering method and for the quasi-reversibility method proposed in [Perthame-Zubelli].
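The filtering idea can be sketched generically (a standard spectral-filter illustration under assumed data, not the paper's size-structured model or its specific filter): a discretized first-kind equation K f = g is ill-posed because small singular values amplify data noise, and a filter damps those components before inversion.

```python
# Generic illustration (assumed kernel and data): spectral filtering of a
# discretized Fredholm equation of the first kind, K f = g.
import numpy as np

rng = np.random.default_rng(2)
n = 100
s = np.linspace(0.0, 1.0, n)
# A smoothing (Gaussian) kernel: severely ill-conditioned after discretization.
K = np.exp(-50.0 * (s[:, None] - s[None, :]) ** 2) / n
f_true = np.sin(2 * np.pi * s)                   # stand-in for the unknown
g = K @ f_true + 1e-4 * rng.standard_normal(n)   # noisy "measured" data

U, sigma, Vt = np.linalg.svd(K)
coeffs = U.T @ g
f_naive = Vt.T @ (coeffs / sigma)        # unfiltered inverse: noise blows up
phi = sigma**2 / (sigma**2 + 1e-6)       # Tikhonov-type filter factors
f_filt = Vt.T @ (phi * coeffs / sigma)   # filtered inverse: stable
```

The filter factors phi are close to 1 on well-resolved singular components and close to 0 where noise dominates, which is the mechanism a filtering-based regularization exploits.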
The Alternating Direction Method of Multipliers (ADMM) provides a natural way of solving inverse problems with multiple partial differential equation (PDE) forward models and nonsmooth regularization. ADMM allows splitting these large-scale inverse problems into smaller, simpler sub-problems, for which computationally efficient solvers are available. In particular, we apply large-scale second-order optimization methods to solve the fully decoupled Tikhonov-regularized inverse problems stemming from each PDE forward model, and we use fast proximal methods to handle the nonsmooth regularization term. In this work, we discuss several adaptations (such as the choice of the consensus norm) needed to maintain consistency with the underlying infinite-dimensional problem. We present two imaging applications inspired by electrical impedance tomography and quantitative photoacoustic tomography to demonstrate the proposed method's effectiveness.
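The splitting structure can be sketched with a toy consensus ADMM (an assumed finite-dimensional stand-in, not the paper's PDE solver): each forward model A_i gets its own decoupled regularized least-squares solve, while the nonsmooth l1 term is handled by a proximal (soft-thresholding) step on a shared consensus variable.

```python
# Toy consensus ADMM sketch (assumed linear forward models A_i, not PDEs):
#   minimize  sum_i 0.5*||A_i x - b_i||^2  +  lam*||x||_1
# via local variables x_i, a consensus variable z, and scaled duals u_i.
import numpy as np

rng = np.random.default_rng(1)
n, m, K = 20, 30, 3                       # unknowns, data per model, models
x_true = np.zeros(n)
x_true[[2, 7, 11]] = [1.0, -2.0, 0.5]     # sparse ground truth
A = [rng.standard_normal((m, n)) for _ in range(K)]
b = [Ai @ x_true + 0.01 * rng.standard_normal(m) for Ai in A]

lam, rho = 0.1, 1.0
z = np.zeros(n)
u = [np.zeros(n) for _ in range(K)]
lhs = [Ai.T @ Ai + rho * np.eye(n) for Ai in A]  # per-model normal equations

for _ in range(300):
    # x_i-update: fully decoupled Tikhonov-like solves, one per forward model
    x = [np.linalg.solve(lhs[i], A[i].T @ b[i] + rho * (z - u[i]))
         for i in range(K)]
    # z-update: proximal step for the l1 term (soft-thresholding)
    xbar = np.mean([x[i] + u[i] for i in range(K)], axis=0)
    z = np.sign(xbar) * np.maximum(np.abs(xbar) - lam / (rho * K), 0.0)
    # scaled dual update
    for i in range(K):
        u[i] += x[i] - z
```

In the full problem each x_i-solve would be a PDE-constrained subproblem handled by a second-order method; here a direct linear solve plays that role.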
A convexification-based numerical method for a Coefficient Inverse Problem for a parabolic PDE is presented. The key element of this method is the presence of the so-called Carleman Weight Function in the numerical scheme. Convergence analysis ensures the global convergence of this method, as opposed to the local convergence of conventional least-squares minimization techniques. Numerical results demonstrate good performance.
For the first time, we develop a convergent numerical method for the linear integral equation derived by M.M. Lavrentev in 1964 with the goal of solving a coefficient inverse problem for a wave-like equation in 3D. The data are non-overdetermined. Convergence analysis is presented along with numerical results. An intriguing feature of the Lavrentev equation is that, without any linearization, it reduces a highly nonlinear coefficient inverse problem to a linear integral equation of the first kind. Nevertheless, numerical results for that equation, which use the data generated for that coefficient inverse problem, show good reconstruction accuracy. This is similar to the classical Gelfand-Levitan equation derived in 1951, which is valid in the 1D case.
We consider the numerical analysis of the inchworm Monte Carlo method, which was recently proposed to tackle the numerical sign problem for open quantum systems. We focus on the growth of the numerical error with respect to the simulation time, for which the inchworm Monte Carlo method shows a flatter curve than the direct application of the Monte Carlo method to the classical Dyson series. To better understand the underlying mechanism of the inchworm Monte Carlo method, we distinguish two types of exponential error growth, known as the numerical sign problem and error amplification. The former is due to the fast growth of variance in the stochastic method, which can be observed from the Dyson series, and the latter comes from the evolution of the numerical solution. Our analysis demonstrates that the technique of partial resummation can be considered a tool to balance these two types of error, and the inchworm Monte Carlo method is a successful case in which the numerical sign problem is effectively suppressed by such means. We first demonstrate our idea in the context of ordinary differential equations, and then provide a complete analysis for the inchworm Monte Carlo method. Several numerical experiments are carried out to verify our theoretical results.
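The sign-problem-versus-resummation trade-off can be illustrated with a toy ODE (an assumed example in the spirit of the abstract's ODE warm-up, not the paper's estimator): an unbiased Monte Carlo estimator of y(t) = exp(-t) built from its alternating series has variance that grows exponentially in t, while a product of short-time estimates, a crude stand-in for partial resummation, keeps the error flat.

```python
# Toy illustration (assumed estimators): sign problem in a direct series
# estimator of y(t) = exp(a*t) with a < 0, versus a stepwise product of
# short-time estimates. Uses the unbiased identity
#   E[exp(t) * a**N] = exp(a*t)   for N ~ Poisson(t),
# whose terms alternate in sign when a = -1 (the "sign problem").
import numpy as np

rng = np.random.default_rng(3)
a = -1.0  # y' = a*y, y(0) = 1, so y(t) = exp(a*t)

def direct_estimate(t, n_samples):
    """Average of single-shot series estimators over the whole interval."""
    N = rng.poisson(t, size=n_samples)
    return np.mean(np.exp(t) * a**N)

def resummed_estimate(t, n_steps, n_samples):
    """Product of short-time estimates on [0, t] split into n_steps pieces."""
    dt = t / n_steps
    est = 1.0
    for _ in range(n_steps):
        N = rng.poisson(dt, size=n_samples)
        est *= np.mean(np.exp(dt) * a**N)
    return est

t = 6.0
exact = np.exp(a * t)
err_direct = abs(direct_estimate(t, 60_000) - exact)
err_resum = abs(resummed_estimate(t, 30, 2_000) - exact)
print(f"direct error: {err_direct:.2e}, stepwise error: {err_resum:.2e}")
```

The direct estimator's standard deviation scales like exp(t) while the quantity itself decays like exp(-t), so its relative error explodes; the stepwise product only accumulates the small relative error of each short-time piece.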