A main drawback of classical Tikhonov regularization is that the parameters required to apply theoretical results, e.g., the smoothness of the sought-after solution and the noise level, are often unknown in practice. In this paper we take a new, detailed look at the residuals in Tikhonov regularization viewed as functions of the regularization parameter. We show that, under some restrictions, the residual carries information on both the unknown solution and the noise level. By computing approximate solutions for a large range of regularization parameters, we can extract both parameters from the residual given only one set of noisy data and the forward operator. The smoothness of the residual allows us to revisit parameter choice rules and to relate a-priori, a-posteriori, and heuristic rules in a novel way that blurs the classical division between the parameter choice rules. All results are accompanied by numerical experiments.
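To make the idea concrete, the following is a minimal sketch of the residual curve over a range of regularization parameters (the Hilbert matrix, the true solution, and the noise level delta below are illustrative assumptions, not taken from the paper):

```python
# Sketch: Tikhonov residual as a function of the regularization parameter.
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # ill-conditioned
x_true = np.sin(np.linspace(0.0, np.pi, n))
delta = 1e-3                                   # noise level, unknown in practice
y = A @ x_true + delta * rng.standard_normal(n)

alphas = np.logspace(-12, 2, 200)
residuals = []
for alpha in alphas:
    # Tikhonov solution x_alpha = argmin ||A x - y||^2 + alpha ||x||^2
    x_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
    residuals.append(np.linalg.norm(A @ x_alpha - y))

# For small alpha the residual curve flattens out near the noise level,
# which is one way the curve encodes information about delta.
print(f"residual range: [{min(residuals):.2e}, {max(residuals):.2e}]")
```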
With the rapid growth of data, extracting useful information from data is one of the most fundamental problems. In this paper, based on Tikhonov regularization, we propose an effective method for reconstructing a function and its derivative from scattered data with random noise. Since the noise level is not assumed to be small, we use the large amount of data to reduce the random error, and use a relatively small number of knots for interpolation. We construct an indicator function for our algorithm that indicates where the numerical results are reliable and where they may not be. The corresponding error estimates are obtained. We show how to choose the number of interpolation knots in the reconstruction process so as to balance the random errors against the interpolation errors. Numerical examples show the effectiveness and speed of our method. It should be remarked that the algorithm in this paper can also be used for online data.
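As a rough illustration of this kind of reconstruction (not the authors' exact algorithm; the knot count, penalty weight, and piecewise-linear interpolation matrix are assumptions made for the sketch):

```python
# Sketch: fit values u at a small number of knots to many noisy samples by
# Tikhonov regularization with a second-difference penalty, then recover u'.
import numpy as np

rng = np.random.default_rng(1)
N, m, lam = 2000, 31, 1e-2                     # samples, knots, penalty (illustrative)
t = np.sort(rng.uniform(0.0, 1.0, N))          # scattered sample points
f = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(N)  # noise not small

knots = np.linspace(0.0, 1.0, m)
# P maps knot values to the sample points by piecewise-linear interpolation.
idx = np.clip(np.searchsorted(knots, t) - 1, 0, m - 2)
w = (t - knots[idx]) / (knots[1] - knots[0])
P = np.zeros((N, m))
P[np.arange(N), idx] = 1.0 - w
P[np.arange(N), idx + 1] = w

# Second-difference matrix penalizing roughness of the knot values.
D = np.diff(np.eye(m), n=2, axis=0)
u = np.linalg.solve(P.T @ P + lam * D.T @ D, P.T @ f)   # Tikhonov solution
du = np.gradient(u, knots)                              # reconstructed derivative
print(u.shape, du.shape)
```

Because the fit is a linear solve of size m (the small knot count), new samples only update P.T @ P and P.T @ f, which is what makes an online variant plausible.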
This paper is concerned with the introduction of Tikhonov regularization into a least squares approximation scheme on $[-1,1]$ with orthonormal polynomials, in order to handle noisy data. This scheme includes interpolation and hyperinterpolation as special cases. With Gauss quadrature points employed as nodes, the coefficients of the approximation polynomial with respect to the given basis are derived in an entry-wise closed form. Under interpolatory conditions, the solution to the regularized approximation problem is rewritten in the form of two kinds of barycentric interpolation formulae, obtained by introducing only a multiplicative correction factor into each of the classical barycentric formulae. An $L_2$ error bound and a uniform error bound are derived; both show that Tikhonov regularization reduces the operator norm (Lebesgue constant) and the error term related to the noise level, in each case by a multiplicative correction factor that is less than one. Numerical examples show the benefits of Tikhonov regularization when the data are noisy or the data size is relatively small.
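A short sketch of the entry-wise closed form under discrete orthonormality at the Gauss points (the degree, node count, and regularization parameter below are illustrative choices, not taken from the paper): with Gauss quadrature exact for the relevant degrees, each regularized coefficient is the plain quadrature coefficient scaled by the factor $1/(1+\lambda) < 1$.

```python
# Sketch: Tikhonov-regularized least squares with orthonormal Legendre
# polynomials at Gauss-Legendre nodes, coefficients in closed form.
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

L, n, lam = 10, 20, 1e-1                      # degree, nodes (n > L), parameter
x, w = leggauss(n)                            # Gauss-Legendre nodes/weights on [-1, 1]

rng = np.random.default_rng(2)
f = np.exp(x) + 0.05 * rng.standard_normal(n)  # noisy samples

# Orthonormal Legendre values: p_k = sqrt(k + 1/2) * P_k.
coeffs = []
for k in range(L + 1):
    c = np.zeros(L + 1); c[k] = np.sqrt(k + 0.5)
    pk = legval(x, c)
    # Closed-form regularized coefficient: quadrature sum times 1/(1 + lam).
    coeffs.append((w * f * pk).sum() / (1.0 + lam))

# Evaluate the regularized approximation polynomial at a few points.
xx = np.linspace(-1, 1, 5)
approx = sum(a * legval(xx, np.eye(L + 1)[k] * np.sqrt(k + 0.5))
             for k, a in enumerate(coeffs))
print(approx)
```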
Many problems in fluid dynamics are effectively modeled as Stokes flows: slow, viscous flows where the Reynolds number is small. Boundary integral equations are often used to solve these problems, where the fundamental solutions for the fluid velocity are the Stokeslet and the stresslet. One of the main challenges in evaluating the boundary integrals is that the kernels become singular on the surface. A regularization method that eliminates the singularities and reduces the numerical error through correction terms for both the Stokeslet and stresslet integrals was developed in Tlupova and Beale, JCP (2019). In this work we build on that method to introduce a new stresslet regularization that is simpler and achieves higher accuracy when evaluated on the surface. Our regularization replaces a seventh-degree polynomial, determined by two conditions in two unknowns, with a fifth-degree polynomial determined by one condition in one unknown. Numerical experiments demonstrate that the new regularization retains the same order of convergence as the regularization of Tlupova and Beale but with a smaller error magnitude.
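For orientation only, here is a sketch of a regularized Stokeslet evaluation using the classical blob of Cortez (2001), which is not the fifth-degree polynomial regularization introduced in this paper; it merely illustrates how regularization replaces the singular $1/r$ kernel by a smooth function of $\sqrt{r^2 + \epsilon^2}$:

```python
# Sketch: classical regularized Stokeslet (Cortez 2001), for illustration.
import numpy as np

def regularized_stokeslet(x, x0, f, eps, mu=1.0):
    """Velocity at x induced by a point force f at x0, blob parameter eps."""
    r = x - x0
    r2 = r @ r
    d = np.sqrt(r2 + eps**2)                  # regularized distance
    # u = (1/(8 pi mu)) [ f (r^2 + 2 eps^2) + (f . r) r ] / d^3
    return (f * (r2 + 2 * eps**2) + (f @ r) * r) / (8 * np.pi * mu * d**3)

# The velocity remains finite even at the singular point x = x0.
u = regularized_stokeslet(np.zeros(3), np.zeros(3),
                          np.array([1.0, 0.0, 0.0]), eps=0.1)
print(u)
```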
In this paper, we consider the minimization of a Tikhonov functional with an $\ell_1$ penalty for solving linear inverse problems with sparsity constraints. One of the many approaches to this problem uses a Nemytskii operator to transform the Tikhonov functional into one with an $\ell_2$ penalty term but a nonlinear operator. The transformed problem can then be analyzed and minimized using standard methods. However, by the nature of this transform, the resulting functional is only once continuously differentiable, which precludes the use of second-order methods. Hence, in this paper, we propose a different transformation that leads to a twice continuously differentiable functional, which can then be minimized using efficient second-order methods such as Newton's method. We provide a convergence analysis of the proposed scheme, as well as a number of numerical results showing the usefulness of our approach.
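A sketch of the standard once-differentiable transform the abstract alludes to (the paper's own transform, which achieves twice differentiability, is different; the entrywise operator $\eta(z)_i = z_i|z_i|$ below is the commonly used choice, assumed here for illustration):

```python
# eta(z) = z|z| gives ||eta(z)||_1 = ||z||_2^2, turning the ell_1 penalty
# into an ell_2 penalty at the cost of a nonlinear operator in the data fit.
import numpy as np

def transformed_functional(z, A, y, alpha):
    x = z * np.abs(z)                         # Nemytskii operator, entrywise
    r = A @ x - y
    return r @ r + alpha * (z @ z)            # ||A eta(z) - y||^2 + alpha ||z||^2

def transformed_gradient(z, A, y, alpha):
    x = z * np.abs(z)
    # Chain rule with eta'(z) = 2|z|: continuous but not differentiable at 0,
    # so the transformed functional is only C^1 and Newton-type methods
    # cannot be applied directly.
    return 4.0 * np.abs(z) * (A.T @ (A @ x - y)) + 2.0 * alpha * z

# Usage on a tiny random problem, minimized by plain gradient descent.
rng = np.random.default_rng(3)
A = rng.standard_normal((20, 40))
y = A @ np.eye(40)[0]                         # data from a 1-sparse signal
z = 0.1 * rng.standard_normal(40)             # nonzero start (gradient vanishes at 0)
for _ in range(500):
    z -= 1e-3 * transformed_gradient(z, A, y, alpha=1e-2)
print(transformed_functional(z, A, y, alpha=1e-2))
```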
Most of the literature on the solution of linear ill-posed operator equations, or their discretizations, focuses either on the infinite-dimensional setting or on the solution of the algebraic linear system of equations obtained by discretization. This paper discusses the influence of the discretization error on the computed solution. We consider the situation in which the discretization yields an algebraic linear system of equations with a large matrix. An approximate solution of this system is computed by first determining a reduced system of fairly small size by carrying out a few steps of the Arnoldi process. Tikhonov regularization is applied to the reduced problem, and the regularization parameter is determined by the discrepancy principle. The errors incurred in each step of the solution process are discussed. Computed examples illustrate the error bounds derived.
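A minimal sketch of the Arnoldi-Tikhonov approach with the discrepancy principle (the bisection search and the safety factor tau are illustrative implementation choices, not the paper's exact procedure):

```python
# Sketch: k Arnoldi steps reduce A to a small Hessenberg matrix H, Tikhonov
# regularization is applied to the projected problem, and mu is chosen so the
# residual matches the noise level delta (discrepancy principle).
import numpy as np

def arnoldi(A, b, k):
    n = len(b)
    Q = np.zeros((n, k + 1)); H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):                         # no breakdown handling (sketch)
        v = A @ Q[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, H

def reduced_tikhonov(A, b, k, delta, tau=1.01):
    Q, H = arnoldi(A, b, k)
    e1 = np.zeros(k + 1); e1[0] = np.linalg.norm(b)
    def residual(mu):
        y = np.linalg.solve(H.T @ H + mu * np.eye(k), H.T @ e1)
        return np.linalg.norm(H @ y - e1), y   # equals ||A Q_k y - b||
    lo, hi = 1e-14, 1e6
    for _ in range(100):                       # bisection on log(mu)
        mu = np.sqrt(lo * hi)
        r, y = residual(mu)
        lo, hi = (mu, hi) if r < tau * delta else (lo, mu)
    return Q[:, :k] @ y

# Tiny demonstration on a synthetic ill-conditioned matrix.
rng = np.random.default_rng(4)
U, _, Vt = np.linalg.svd(rng.standard_normal((100, 100)))
A = U @ np.diag(np.logspace(0, -10, 100)) @ Vt
x_true = Vt[0]
b = A @ x_true + 1e-6 * rng.standard_normal(100)
x = reduced_tikhonov(A, b, k=20, delta=1e-6 * np.sqrt(100))
print(np.linalg.norm(A @ x - b))
```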