
Function Approximation via the Subsampled Poincaré Inequality

Added by Yifan Chen
Publication date: 2019
Research language: English


Function approximation and recovery from sampled data have long been studied in a wide array of applied mathematics and statistics fields. Analytic tools, such as the Poincaré inequality, have been instrumental for estimating approximation errors at different scales. The purpose of this paper is to study a generalized Poincaré inequality, where the measurement function is of subsampled type, with a small but non-zero lengthscale that will be made precise. Our analysis identifies this inequality as a basic tool for function recovery problems. We discuss and demonstrate the optimality of the inequality with respect to the subsampled lengthscale, connecting it to existing results in the literature. In application to function approximation problems, the approximation accuracy using different basis functions and under different regularity assumptions is established by using the subsampled Poincaré inequality. We observe that the error bound blows up as the subsampled lengthscale approaches zero, due to the fact that the underlying function is not regular enough to have well-defined pointwise values. A weighted version of the Poincaré inequality is proposed to address this problem; its optimality is also discussed.
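As a rough numerical illustration of the classical (non-subsampled) Poincaré–Wirtinger inequality that underlies this kind of error analysis, the sketch below checks the sharp constant $1/\pi$ on the unit interval. The test function and grid are illustrative choices, not taken from the paper:

```python
import numpy as np

# A minimal numerical check of the classical Poincare-Wirtinger inequality
# on [0, 1]: for mean-zero u,  ||u||_{L^2} <= (1/pi) ||u'||_{L^2}.
# The function u(x) = cos(pi x) is mean-zero and attains equality.

x = np.linspace(0.0, 1.0, 100_001)
dx = x[1] - x[0]
u = np.cos(np.pi * x)              # mean-zero on [0, 1]
du = -np.pi * np.sin(np.pi * x)    # exact derivative

l2_u = np.sqrt(np.sum(u**2) * dx)   # discrete L^2 norms
l2_du = np.sqrt(np.sum(du**2) * dx)

ratio = l2_u / l2_du
print(ratio)  # close to 1/pi, the sharp Poincare constant
```

For less symmetric mean-zero functions the ratio falls strictly below $1/\pi$, which is what makes the constant sharp rather than merely admissible.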



Related research

Yifan Chen, Thomas Y. Hou (2020)
There is an intimate connection between numerical upscaling of multiscale PDEs and scattered data approximation of heterogeneous functions: the coarse variables selected for deriving an upscaled equation (in the former) correspond to the sampled information used for approximation (in the latter). As such, both problems can be thought of as recovering a target function based on some coarse data that are either artificially chosen by an upscaling algorithm or determined by some physical measurement process. The purpose of this paper is to study, under such a setup and for a specific elliptic problem, how the lengthscale of the coarse data, which we refer to as the subsampled lengthscale, influences the accuracy of recovery, given limited computational budgets. Our analysis and experiments show that reducing the subsampled lengthscale may improve the accuracy, implying a guiding criterion for coarse-graining or data acquisition in this computationally constrained scenario, and in particular leading to direct insights for the implementation of the Gamblets method in the numerical homogenization literature. Moreover, reducing the lengthscale to zero may lead to a blow-up of the approximation error if the target function does not have enough regularity, suggesting the need for a stronger prior assumption on the target function to be approximated. We introduce a singular weight function to address this, both theoretically and numerically. This work sheds light on the interplay of the lengthscale of coarse data, the computational costs, the regularity of the target function, and the accuracy of approximations and numerical simulations.
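The dependence of recovery accuracy on a coarse lengthscale can be seen in a toy setting. This sketch (not the paper's algorithm) recovers a smooth target from cell averages at lengthscale $H$ by a piecewise-constant reconstruction and observes first-order convergence in $H$; the target function is an arbitrary smooth example:

```python
import numpy as np

# Toy recovery from coarse data: average a smooth target over cells of
# width H (the "subsampled lengthscale"), reconstruct it as a piecewise-
# constant function, and measure the discrete L^2 error.

def recover_error(H, n_fine=2**14):
    x = (np.arange(n_fine) + 0.5) / n_fine          # fine midpoints on [0, 1)
    f = np.sin(2 * np.pi * x) + 0.3 * np.cos(6 * np.pi * x)
    n_cells = int(round(1.0 / H))
    cell = np.minimum((x / H).astype(int), n_cells - 1)
    # cell averages play the role of the coarse data
    avg = np.bincount(cell, weights=f) / np.bincount(cell)
    return np.sqrt(np.mean((f - avg[cell])**2))     # discrete L^2 error

e1 = recover_error(1 / 16)
e2 = recover_error(1 / 32)
print(e1 / e2)  # roughly 2: halving H halves the error for smooth targets
```

For rougher targets this clean first-order rate degrades, which is the regularity issue the abstract's singular weight function is designed to handle.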
Gisella Croce (2021)
In this paper, we consider a problem in the calculus of variations motivated by a quantitative isoperimetric inequality in the plane. More precisely, the aim of this article is the computation of the minimum of the variational problem $$\inf_{u\in\mathcal{W}}\frac{\displaystyle\int_{-\pi}^{\pi}\left[(u')^2-u^2\right]d\theta}{\left[\displaystyle\int_{-\pi}^{\pi}|u|\,d\theta\right]^2}$$ where $u\in\mathcal{W}$ is an $H^1(-\pi,\pi)$ periodic function, with zero average on $(-\pi,\pi)$ and orthogonal to sine and cosine.
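One can probe this quotient numerically by evaluating it at an admissible trial function. The sketch below uses $u(\theta)=\cos(2\theta)$, which is $2\pi$-periodic, mean-zero, and orthogonal to $\sin\theta$ and $\cos\theta$; this gives an upper bound of $3\pi/16$ on the infimum, purely for illustration:

```python
import numpy as np

# Evaluate the variational quotient at the trial function u = cos(2*theta),
# an admissible (but not necessarily optimal) element of the class W.

theta = np.linspace(-np.pi, np.pi, 200_001)
dth = theta[1] - theta[0]
u = np.cos(2 * theta)
du = -2 * np.sin(2 * theta)

num = np.sum(du**2 - u**2) * dth            # integral of (u')^2 - u^2
den = (np.sum(np.abs(u)) * dth) ** 2        # [ integral of |u| ]^2
print(num / den)  # 3*pi/16, an upper bound on the infimum
```

Testing several admissible trial functions this way is a quick sanity check against any closed-form value one derives for the minimum.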
Stokes variational inequalities arise in the formulation of glaciological problems involving contact. Two important examples of such problems are that of the grounding line of a marine ice sheet and the evolution of a subglacial cavity. In general, rigid modes are present in the velocity space, rendering the variational inequality semicoercive. In this work, we consider a mixed formulation of this variational inequality involving a Lagrange multiplier and provide an analysis of its finite element approximation. Error estimates in the presence of rigid modes are obtained by means of a novel technique involving metric projections onto closed convex cones. Numerical results are reported to validate the error estimates and demonstrate the advantages of using a mixed formulation in a glaciological application.
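The error-estimate technique mentioned above rests on metric projections onto closed convex cones. A minimal sketch of the defining property, using the nonnegative orthant as a toy cone (not the glaciological constraint set), is:

```python
import numpy as np

# Project a point onto a closed convex cone K = {w >= 0} and verify the
# characterizing variational inequality  <v - Pv, w - Pv> <= 0  for w in K.

rng = np.random.default_rng(0)
v = rng.standard_normal(5)
Pv = np.maximum(v, 0.0)            # Euclidean projection onto the orthant

samples = np.maximum(rng.standard_normal((100, 5)), 0.0)  # points in K
ok = all(np.dot(v - Pv, w - Pv) <= 1e-12 for w in samples)
print(ok)  # True: Pv satisfies the projection variational inequality
```

It is exactly this obtuseness property, in the energy inner product, that lets rigid modes be controlled in the semicoercive setting.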
In this paper we introduce a family of rational approximations of the reciprocal of a $\phi$-function involved in the explicit solutions of certain linear differential equations, as well as in integration schemes evolving on manifolds. The derivation and properties of this family of approximations applied to scalar and matrix arguments are presented. Moreover, we show that the matrix functions computed by these approximations exhibit decaying properties comparable to the best existing theoretical bounds. Numerical examples highlight the benefits of the proposed rational approximations with respect to the classical Taylor polynomials and other rational functions.
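The advantage of rational over polynomial approximation for $\phi$-functions on the negative real axis can be seen already in a scalar toy example (not the paper's family): compare the degree-2 Taylor polynomial of $\phi_1(z)=(e^z-1)/z$ with its $[1/1]$ Padé approximant $(1+z/6)/(1-z/3)$:

```python
import numpy as np

# phi_1(z) = (e^z - 1)/z, approximated two ways at a moderately large
# negative argument, where Taylor polynomials blow up but rational
# approximants stay bounded.

def phi1(z):
    return np.expm1(z) / z

z = -5.0
taylor = 1 + z / 2 + z**2 / 6          # degree-2 Taylor polynomial
pade = (1 + z / 6) / (1 - z / 3)       # [1/1] Pade approximant

err_taylor = abs(taylor - phi1(z))
err_pade = abs(pade - phi1(z))
print(err_pade < err_taylor)  # True: the rational form is far more accurate
```

This boundedness on the negative axis is what translates, for matrix arguments with spectra there, into the decay behavior the abstract refers to.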
We analyze the Lanczos method for matrix function approximation (Lanczos-FA), an iterative algorithm for computing $f(\mathbf{A})\mathbf{b}$ when $\mathbf{A}$ is a Hermitian matrix and $\mathbf{b}$ is a given vector. Assuming that $f:\mathbb{C}\rightarrow\mathbb{C}$ is piecewise analytic, we give a framework, based on the Cauchy integral formula, which can be used to derive \emph{a priori} and \emph{a posteriori} error bounds for Lanczos-FA in terms of the error of Lanczos used to solve linear systems. Unlike many error bounds for Lanczos-FA, these bounds account for fine-grained properties of the spectrum of $\mathbf{A}$, such as clustered or isolated eigenvalues. Our results are derived assuming exact arithmetic, but we show that they are easily extended to finite precision computations using existing theory about the Lanczos algorithm in finite precision. We also provide generalized bounds for the Lanczos method used to approximate quadratic forms $\mathbf{b}^{\mathsf{H}} f(\mathbf{A})\mathbf{b}$, and demonstrate the effectiveness of our bounds with numerical experiments.
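The Lanczos-FA iterate itself is simple to state: run $k$ Lanczos steps to build $\mathbf{Q}$ and the tridiagonal $\mathbf{T}$, then take $\|\mathbf{b}\|\,\mathbf{Q} f(\mathbf{T}) e_1$. A minimal real-symmetric sketch, with $f=\exp$, a random test matrix, and full reorthogonalization added for robustness in this toy setting:

```python
import numpy as np

# Lanczos-FA sketch: approximate f(A) b by ||b|| * Q f(T) e_1, where T is
# the k-by-k tridiagonal matrix produced by k Lanczos steps on (A, b).

def lanczos_fa(A, b, k, f):
    n = len(b)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)               # f(T) via eigendecomposition
    fT_e1 = evecs @ (f(evals) * evecs[0])          # f(T) e_1
    return np.linalg.norm(b) * (Q @ fT_e1)

rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = (M + M.T) / 2                                  # random symmetric test matrix
b = rng.standard_normal(50)

w, V = np.linalg.eigh(A)
exact = V @ (np.exp(w) * (V.T @ b))                # reference exp(A) b
approx = lanczos_fa(A, b, 25, np.exp)
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))  # small
```

The bounds discussed in the abstract quantify how fast this error decays in $k$, including the spectrum-adaptive effects that a worst-case polynomial bound misses.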