
An optimal approximation formula for functions with singularities

Added by Ken'ichiro Tanaka
Publication date: 2016
Language: English





We propose an optimal approximation formula for analytic functions that are defined on a complex region containing the real interval $(-1,1)$ and possibly have algebraic singularities at the endpoints of the interval. As a space of such functions, we consider a Hardy space with the weight given by $w_{\mu}(z) = (1-z^{2})^{\mu/2}$ for $\mu > 0$, and formulate the optimality of an approximation formula for the functions in the space. Then, we propose an optimal approximation formula for the space for any $\mu > 0$, as opposed to existing results with the restriction $0 < \mu < \mu_{\ast}$ for a certain constant $\mu_{\ast}$. We also provide the results of numerical experiments to show the performance of the proposed formula.
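The following is a minimal numerical sketch of the setting only, not the optimal formula proposed in the paper: it defines the weight $w_{\mu}(z) = (1-z^{2})^{\mu/2}$, builds a sample function with algebraic singularities at $x = \pm 1$, and measures the error of a generic stand-in approximation (polynomial interpolation at Chebyshev nodes). The choice of target function and of interpolation scheme are illustrative assumptions.

# A minimal sketch (not the paper's formula): it only illustrates the weighted
# Hardy-space setting by approximating a function with algebraic endpoint
# singularities on (-1, 1) with a generic stand-in interpolant.
import numpy as np

mu = 1.5                                   # weight exponent, any mu > 0

def w_mu(x):
    # Weight w_mu(x) = (1 - x^2)^(mu/2) on (-1, 1).
    return (1.0 - x**2) ** (mu / 2.0)

def f(x):
    # Sample function with algebraic singularities at x = +-1 (illustrative).
    return w_mu(x) * np.exp(x)

# Stand-in approximation: polynomial interpolation at Chebyshev nodes.
n = 40
nodes = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))
coeffs = np.polynomial.chebyshev.chebfit(nodes, f(nodes), n - 1)

# Maximum error on a fine grid inside (-1, 1).
x = np.linspace(-0.999, 0.999, 5001)
err = np.max(np.abs(f(x) - np.polynomial.chebyshev.chebval(x, coeffs)))
print(f"max error of the stand-in interpolant: {err:.2e}")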




Read More

This paper studies numerical methods for the approximation of elliptic PDEs with lognormal coefficients of the form $-\mathrm{div}(a\nabla u)=f$, where $a=\exp(b)$ and $b$ is a Gaussian random field. The approximant of the solution $u$ is an $n$-term polynomial expansion in the scalar Gaussian random variables that parametrize $b$. We present a general convergence analysis of weighted least-squares approximants for smooth and arbitrarily rough random fields, using a suitable random design, for which we prove optimality in the following sense: their convergence rate matches exactly or closely the rate that has been established in \cite{BCDM} for best $n$-term approximation by Hermite polynomials, under the same minimal assumptions on the Gaussian random field. This is in contrast with the current state-of-the-art results for the stochastic Galerkin method, which suffers from a lack of coercivity due to the lognormal nature of the diffusion field. Numerical tests with $b$ as the Brownian bridge confirm our theoretical findings.
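As a rough illustration of the kind of approximant discussed above, the sketch below fits an $n$-term Hermite polynomial expansion to a toy function of a single Gaussian variable by least squares on Monte Carlo samples. It is a deliberately simplified assumption: one scalar variable, an unweighted design, and a made-up target $u$, rather than the weighted random design analyzed in the paper.

# A minimal sketch (assumed setting: one Gaussian variable, unweighted Monte
# Carlo design): least-squares fit of an n-term Hermite polynomial expansion.
import numpy as np
from numpy.polynomial import hermite_e as He   # probabilists' Hermite He_k

rng = np.random.default_rng(0)

def u(y):
    # Toy quantity of interest depending on one Gaussian variable y.
    return np.exp(np.sin(y))

n_terms = 8                      # number of Hermite polynomials He_0 .. He_7
m = 2000                         # number of random evaluation points
y = rng.standard_normal(m)       # design drawn from the Gaussian measure

A = He.hermevander(y, n_terms - 1)               # design matrix [He_k(y_i)]
coeffs, *_ = np.linalg.lstsq(A, u(y), rcond=None)

# Estimate the L^2 error with respect to the Gaussian measure.
y_test = rng.standard_normal(100_000)
err = np.sqrt(np.mean((u(y_test) - He.hermeval(y_test, coeffs)) ** 2))
print(f"estimated L2(Gaussian) error: {err:.2e}")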
V.N. Temlyakov (2015)
The paper gives a constructive method, based on greedy algorithms, that provides, for classes of functions with small mixed smoothness, the best possible order of approximation error for $m$-term approximation with respect to the trigonometric system.
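A minimal sketch of $m$-term approximation with respect to the trigonometric system is given below: it keeps the $m$ largest Fourier coefficients of a sample periodic function, a simple thresholding greedy step. This illustrates the notion of $m$-term approximation only, not the specific constructive greedy algorithm of the paper.

# A minimal sketch: m-term trigonometric approximation by keeping the m
# largest Fourier coefficients (thresholding greedy step), not the paper's
# constructive algorithm.
import numpy as np

N = 1024
x = 2 * np.pi * np.arange(N) / N
f = np.abs(np.sin(x)) ** 3                 # sample periodic target function

c = np.fft.fft(f) / N                      # discrete Fourier coefficients
m = 20
keep = np.argsort(np.abs(c))[-m:]          # indices of the m largest coefficients
c_m = np.zeros_like(c)
c_m[keep] = c[keep]

f_m = np.real(np.fft.ifft(c_m) * N)        # m-term trigonometric approximant
rel_err = np.linalg.norm(f - f_m) / np.linalg.norm(f)
print(f"relative l2 error with m = {m} terms: {rel_err:.2e}")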
Given a function $u\in L^2=L^2(D,\mu)$, where $D\subset \mathbb{R}^d$ and $\mu$ is a measure on $D$, and a linear subspace $V_n\subset L^2$ of dimension $n$, we show that near-best approximation of $u$ in $V_n$ can be computed from a near-optimal budget of $Cn$ pointwise evaluations of $u$, with $C>1$ a universal constant. The sampling points are drawn according to some random distribution, the approximation is computed by a weighted least-squares method, and the error is assessed in expected $L^2$ norm. This result improves on the results in [6,8], which require a sampling budget that is sub-optimal by a logarithmic factor, thanks to a sparsification strategy introduced in [17,18]. As a consequence, we obtain for any compact class $\mathcal{K}\subset L^2$ that the sampling number $\rho_{Cn}^{\mathrm{rand}}(\mathcal{K})_{L^2}$ in the randomized setting is dominated by the Kolmogorov $n$-width $d_n(\mathcal{K})_{L^2}$. While our result shows the existence of a randomized sampling with such near-optimal properties, we discuss remaining issues concerning its generation by a computationally efficient algorithm.
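The sketch below illustrates the weighted least-squares construction in a simplified setting that is assumed here rather than taken from the paper: $D = [-1,1]$, $\mu$ the uniform measure, and $V_n$ spanned by Legendre polynomials. Sample points are drawn from the density proportional to the inverse Christoffel function $\sum_k |L_k(x)|^2$, and the fit uses the corresponding weights; the sparsification step that yields the near-optimal $Cn$ budget is not reproduced.

# A minimal sketch (assumed setting: D = [-1, 1], mu = uniform, V_n spanned by
# Legendre polynomials): weighted least squares from points drawn according to
# the inverse-Christoffel-function density.
import numpy as np
from numpy.polynomial import legendre as Leg

rng = np.random.default_rng(1)
n = 10                                      # dimension of V_n
m = 5 * n                                   # sampling budget (factor 5 assumed)

def ortho_basis(x):
    # Legendre polynomials, orthonormalized with respect to dmu = dx/2.
    return Leg.legvander(x, n - 1) * np.sqrt(2 * np.arange(n) + 1)

def k_n(x):
    # Inverse Christoffel function k_n(x) = sum_k |L_k(x)|^2 (bounded by n^2).
    return np.sum(ortho_basis(x) ** 2, axis=1)

# Rejection sampling from the density k_n(x)/n with respect to dmu = dx/2.
pts = []
while len(pts) < m:
    cand = rng.uniform(-1, 1, size=m)
    accept = rng.uniform(0, n, size=m) < k_n(cand) / n
    pts.extend(cand[accept])
x = np.array(pts[:m])

def u(t):
    # Toy target function in L^2 (illustrative).
    return np.exp(t) * np.cos(3 * t)

w = n / k_n(x)                              # least-squares weights
A = ortho_basis(x) * np.sqrt(w)[:, None]
coeffs, *_ = np.linalg.lstsq(A, np.sqrt(w) * u(x), rcond=None)

# Assess the error in the L^2(dx/2) norm on a fine grid.
t = np.linspace(-1, 1, 4001)
err = np.sqrt(np.mean((u(t) - ortho_basis(t) @ coeffs) ** 2))
print(f"weighted least-squares L2 error with m = {m} points: {err:.2e}")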
While it is well known that nonlinear methods of approximation can often perform dramatically better than linear methods, there are still questions on how to measure the optimal performance possible for such methods. This paper studies nonlinear methods of approximation that are compatible with numerical implementation in that they are required to be numerically stable. A measure of optimal performance, called \emph{stable manifold widths}, for approximating a model class $K$ in a Banach space $X$ by stable manifold methods is introduced. Fundamental inequalities between these stable manifold widths and the entropy of $K$ are established. The effects of requiring stability in the settings of deep learning and compressed sensing are discussed.
We propose and analyze a numerical method to solve an elliptic transmission problem in full space. The method consists of a variational formulation involving standard boundary integral operators on the coupling interface and an ultra-weak formulation in the interior. To guarantee the discrete inf-sup condition, the system is discretized by the DPG method with optimal test functions. We prove that principal unknowns are approximated quasi-optimally. Numerical experiments for problems with smooth and singular solutions confirm optimal convergence orders.