
Tikhonov regularization for polynomial approximation problems in Gauss quadrature points

Posted by: Hao-Ning Wu
Publication date: 2020
Research field: Informatics engineering
Paper language: English

This paper introduces Tikhonov regularization into the least squares approximation scheme on $[-1,1]$ by orthonormal polynomials in order to handle noisy data. The scheme includes interpolation and hyperinterpolation as special cases. With Gauss quadrature points employed as nodes, the coefficients of the approximation polynomial with respect to the given basis are derived in an entry-wise closed form. Under interpolatory conditions, the solution to the regularized approximation problem is rewritten in the form of two kinds of barycentric interpolation formulae by introducing only a multiplicative correction factor into each classical barycentric formula. An $L_2$ error bound and a uniform error bound are derived; both convey the same message, that Tikhonov regularization reduces the operator norm (the Lebesgue constant) and the error term related to the noise level, in each case by a multiplicative correction factor less than one. Numerical examples show the benefits of Tikhonov regularization when the data are noisy or the data size is relatively small.
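For the simplest choice of regularization operator (the identity), the entry-wise closed form mentioned in the abstract reduces to a coefficient-wise shrinkage: with $n+1$ Gauss points the quadrature is exact for products of degree-$\le n$ polynomials, so the discrete basis is exactly orthonormal and the regularized coefficients are the hyperinterpolation coefficients divided by $1+\lambda$. The sketch below illustrates only this special case, not the paper's general construction; the test function and parameter values are assumptions.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legvander

def regularized_coeffs(f_vals, x, w, n, lam):
    """Coefficients of the degree-n Tikhonov-regularized least squares
    approximant at Gauss points, assuming the identity penalty."""
    # orthonormal Legendre basis on [-1, 1]: p_j = sqrt((2j+1)/2) * P_j
    Phi = legvander(x, n) * np.sqrt((2 * np.arange(n + 1) + 1) / 2.0)
    a = Phi.T @ (w * f_vals)       # hyperinterpolation coefficients (lam = 0)
    return a / (1.0 + lam)         # entry-wise closed form: uniform shrinkage

n = 20
x, w = leggauss(n + 1)             # n+1 nodes: quadrature exact to degree 2n+1,
                                   # so Phi.T @ diag(w) @ Phi is exactly the identity
rng = np.random.default_rng(0)
f = np.cos(4 * x) + 0.05 * rng.standard_normal(x.size)   # noisy samples

c = regularized_coeffs(f, x, w, n, lam=0.1)

# evaluate the approximant on a fine grid
xx = np.linspace(-1, 1, 201)
p = (legvander(xx, n) * np.sqrt((2 * np.arange(n + 1) + 1) / 2.0)) @ c
```

Setting `lam=0` recovers plain hyperinterpolation (and, with these nodes, interpolation), which makes the role of the correction factor $1/(1+\lambda)$ easy to see experimentally.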



Read also

109 - Jiantang Zhang 2021
With the rapid growth of data, extracting effective information from data is one of the most fundamental problems. In this paper, based on Tikhonov regularization, we propose an effective method for reconstructing a function and its derivative from scattered data with random noise. Since the noise level is not assumed to be small, we use the sheer amount of data to reduce the random error, and a relatively small number of knots for interpolation. We construct an indicator function for our algorithm that shows where the numerical results are reliable and where they may not be, and we obtain the corresponding error estimates. We show how to choose the number of interpolation knots so as to balance the random errors against the interpolation errors. Numerical examples demonstrate the effectiveness and speed of our method. The algorithm can also be used for online data.
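A minimal sketch of the general idea (not this paper's algorithm or its indicator function): fit many noisy scattered samples by Tikhonov-regularized least squares over a deliberately small basis, then differentiate the fit. The test function, basis size, and parameter values are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)

# many noisy scattered samples: the noise is not small, so we rely on
# the amount of data, not on a fine approximation space
m, deg, lam = 2000, 12, 1e-2
x = np.sort(rng.uniform(-1, 1, m))
y = np.sin(3 * x) + 0.2 * rng.standard_normal(m)

# Tikhonov-regularized least squares over a small Legendre basis
A = L.legvander(x, deg)
c = np.linalg.solve(A.T @ A + lam * np.eye(deg + 1), A.T @ y)

# the fitted polynomial and its derivative on a fine grid
xx = np.linspace(-1, 1, 400)
f_rec, df_rec = L.legval(xx, c), L.legval(xx, L.legder(c))
print(np.max(np.abs(f_rec - np.sin(3 * xx))),
      np.max(np.abs(df_rec - 3 * np.cos(3 * xx))))
```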
A main drawback of classical Tikhonov regularization is that the parameters required to apply theoretical results, e.g., the smoothness of the sought-after solution and the noise level, are often unknown in practice. In this paper we investigate in new detail the residuals in Tikhonov regularization, viewed as functions of the regularization parameter. We show that the residual carries, with some restrictions, information on both the unknown solution and the noise level. By computing approximate solutions for a large range of regularization parameters, we can extract both parameters from the residual given only one set of noisy data and the forward operator. The smoothness of the residual allows us to revisit parameter choice rules and to relate a-priori, a-posteriori, and heuristic rules in a novel way that blurs the classical division between them. All results are accompanied by numerical experiments.
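As a rough illustration of the residual-as-a-function-of-the-parameter viewpoint, the sketch below builds a synthetic ill-posed problem from an SVD, evaluates the Tikhonov residual norm over a wide range of parameters, and reads the discrepancy-principle parameter off that curve. The test problem and the safety factor $\tau$ are assumptions, not this paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic ill-posed problem built from an SVD with decaying singular values
n = 100
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -8, n)
A = (U * s) @ V.T
x_true = V @ (s**0.5 * rng.standard_normal(n))   # solution with some smoothness
delta = 1e-4
b = A @ x_true + delta * rng.standard_normal(n)

# residual norm of x_alpha = (A^T A + alpha I)^{-1} A^T b as a function of alpha,
# computed mode-wise: ||A x_alpha - b|| = ||(alpha / (s^2 + alpha)) * (U^T b)||
beta = U.T @ b
alphas = np.logspace(-14, 2, 200)
res = np.array([np.linalg.norm(a / (s**2 + a) * beta) for a in alphas])

# discrepancy principle: pick alpha where the residual curve meets tau * ||noise||
tau = 1.1
alpha_dp = alphas[np.argmin(np.abs(res - tau * delta * np.sqrt(n)))]
print(alpha_dp)
```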
We introduce a fast algorithm for entry-wise evaluation of the Gauss-Newton Hessian (GNH) matrix for the fully-connected feed-forward neural network. The algorithm has a precomputation step and a sampling step. While it generally requires $O(Nn)$ work to compute an entry (and the entire column) of the GNH matrix for a neural network with $N$ parameters and $n$ data points, our fast sampling algorithm reduces the cost to $O(n + d/\epsilon^2)$ work, where $d$ is the output dimension of the network and $\epsilon$ is a prescribed accuracy (independent of $N$). One application of our algorithm is constructing the hierarchical-matrix (H-matrix) approximation of the GNH matrix for solving linear systems and eigenvalue problems. It generally requires $O(N^2)$ memory and $O(N^3)$ work to store and factorize the GNH matrix, respectively. The H-matrix approximation requires only an $O(N r_o)$ memory footprint and $O(N r_o^2)$ work to factorize, where $r_o \ll N$ is the maximum rank of the off-diagonal blocks of the GNH matrix. We demonstrate the performance of our fast algorithm and the H-matrix approximation on classification and autoencoder neural networks.
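The paper's $O(n + d/\epsilon^2)$ algorithm relies on a precomputation step that is not reproduced here; the toy sketch below only fixes the idea of estimating a single GNH entry by sampling data points. For squared loss the output-space Hessian is the identity, so the $(i,j)$ entry is a sum over data points of products of output gradients, which a rescaled uniform subsample estimates unbiasedly. The tiny network and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# tiny fully connected net R^2 -> R with parameters flattened into one vector
theta = rng.standard_normal(17)    # W1 (4x2), b1 (4), W2 (1x4), b2 (1)
X = rng.standard_normal((500, 2))  # n = 500 data points

def forward(theta, X):
    W1, b1 = theta[:8].reshape(4, 2), theta[8:12]
    W2, b2 = theta[12:16].reshape(1, 4), theta[16:17]
    return np.tanh(X @ W1.T + b1) @ W2.T + b2   # shape (n, 1)

def output_grad(i, eps=1e-6):
    # d f(theta, x_k) / d theta_i for every data point k, by central differences
    e = np.zeros_like(theta); e[i] = eps
    return (forward(theta + e, X) - forward(theta - e, X)).ravel() / (2 * eps)

# for squared loss the output-space Hessian is the identity, so
# GNH[i, j] = sum_k (df_k/dtheta_i) * (df_k/dtheta_j)
i, j = 3, 14
gi, gj = output_grad(i), output_grad(j)
exact = gi @ gj

# sampling step: a rescaled uniform subsample of data points estimates the sum;
# a real implementation would evaluate the gradients only at the sampled points
S = rng.choice(X.shape[0], size=50, replace=False)
estimate = X.shape[0] / S.size * (gi[S] @ gj[S])
print(exact, estimate)
```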
84 - Zhengqi Zhang, Zhi Zhou 2021
We aim at the development and analysis of numerical schemes for approximately solving the backward diffusion-wave problem, which involves a fractional derivative in time of order $\alpha \in (1,2)$. From terminal observations at two time levels, i.e., $u(T_1)$ and $u(T_2)$, we simultaneously recover two initial data $u(0)$ and $u_t(0)$, and hence the solution $u(t)$ for all $t > 0$. First, existence, uniqueness, and Lipschitz stability of the backward diffusion-wave problem are established under suitable conditions on $T_1$ and $T_2$. Moreover, for noisy data, we propose a quasi-boundary value scheme to regularize this mildly ill-posed problem and show the convergence of the regularized solution. Next, to numerically solve the regularized problem, a fully discrete scheme is proposed by applying the finite element method in space and convolution quadrature in time. We establish error bounds for the discrete solution for both smooth and nonsmooth data. The error estimates are very useful in practice, since they indicate how to choose the discretization parameters and the regularization parameter according to the noise level. The theoretical results are supported by numerical experiments.
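The fractional setting with two observation times requires Mittag-Leffler functions and is beyond a short sketch; as a loudly simplified stand-in, here is the quasi-boundary value idea applied to the classical backward heat equation on $(0,\pi)$ in a sine basis, where the terminal condition $u(T) = g$ is replaced by $u(T) + \alpha u(0) = g$ so that each Fourier mode is recovered stably. All problem data below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 127
x = np.linspace(0, np.pi, M + 2)[1:-1]    # interior grid points
k = np.arange(1, M + 1)
Phi = np.sin(np.outer(x, k))              # Phi.T @ Phi = (M+1)/2 * I (discrete sine orthogonality)

u0 = np.exp(-10 * (x - np.pi / 2) ** 2)   # "unknown" initial data
c0 = 2.0 / (M + 1) * Phi.T @ u0           # its sine coefficients
T = 0.1
g = Phi @ (np.exp(-k**2 * T) * c0)        # exact terminal data u(T)
g = g + 1e-3 * rng.standard_normal(M)     # noisy terminal observation

# quasi-boundary value: u(T) + alpha*u(0) = g gives, mode by mode,
# c_k(0) = g_k / (alpha + exp(-k^2 T)) instead of the unstable g_k * exp(k^2 T)
alpha = 1e-3
cg = 2.0 / (M + 1) * Phi.T @ g
u0_rec = Phi @ (cg / (alpha + np.exp(-k**2 * T)))
print(np.max(np.abs(u0_rec - u0)))
```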
Most of the literature on the solution of linear ill-posed operator equations, or their discretization, focuses only on the infinite-dimensional setting or only on the solution of the algebraic linear system of equations obtained by discretization. This paper discusses the influence of the discretization error on the computed solution. We consider the situation when the discretization used yields an algebraic linear system of equations with a large matrix. An approximate solution of this system is computed by first determining a reduced system of fairly small size by carrying out a few steps of the Arnoldi process. Tikhonov regularization is applied to the reduced problem and the regularization parameter is determined by the discrepancy principle. Errors incurred in each step of the solution process are discussed. Computed examples illustrate the error bounds derived.
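A minimal sketch of the generic Arnoldi-Tikhonov approach with a discrepancy-principle parameter choice, not this paper's error analysis: the test problem (a 1D Gaussian blur), subspace dimension, and safety factor $\tau$ are assumptions.

```python
import numpy as np

def arnoldi(A, b, m):
    """m steps of the Arnoldi process: A V[:, :m] = V @ H (no breakdown assumed)."""
    V = np.zeros((b.size, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

# synthetic ill-posed problem: a discretized 1D Gaussian blur
n = 200
t = np.linspace(0, 1, n)
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * 0.01**2))
A /= A.sum(axis=1, keepdims=True)
x_true = ((t > 0.3) & (t < 0.7)).astype(float)
rng = np.random.default_rng(0)
delta = 1e-3
b = A @ x_true + delta * rng.standard_normal(n)

# reduce to an (m+1) x m problem, then apply Tikhonov regularization to it;
# since V has orthonormal columns, ||A V y - b|| = ||H y - rhs||
m = 20
V, H = arnoldi(A, b, m)
rhs = np.zeros(m + 1); rhs[0] = np.linalg.norm(b)

# discrepancy principle: decrease mu until ||A x_mu - b|| <= tau * ||noise||
tau = 1.1
for mu in np.logspace(2, -10, 60):
    y = np.linalg.solve(H.T @ H + mu * np.eye(m), H.T @ rhs)
    if np.linalg.norm(H @ y - rhs) <= tau * delta * np.sqrt(n):
        break
x_reg = V[:, :m] @ y
```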