Given a function $u\in L^2=L^2(D,\mu)$, where $D\subset \mathbb{R}^d$ and $\mu$ is a measure on $D$, and a linear subspace $V_n\subset L^2$ of dimension $n$, we show that near-best approximation of $u$ in $V_n$ can be computed from a near-optimal budget of $Cn$ pointwise evaluations of $u$, with $C>1$ a universal constant. The sampling points are drawn according to some random distribution, the approximation is computed by a weighted least-squares method, and the error is assessed in expected $L^2$ norm. This result improves on the results in [6,8], which require a sampling budget that is sub-optimal by a logarithmic factor, thanks to a sparsification strategy introduced in [17,18]. As a consequence, we obtain for any compact class $\mathcal{K}\subset L^2$ that the sampling number $\rho_{Cn}^{\rm rand}(\mathcal{K})_{L^2}$ in the randomized setting is dominated by the Kolmogorov $n$-width $d_n(\mathcal{K})_{L^2}$. While our result shows the existence of a randomized sampling with such near-optimal properties, we discuss remaining issues concerning its generation by a computationally efficient algorithm.
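To make the estimator concrete, here is a minimal sketch of weighted least-squares approximation from random samples in the spirit described above. It assumes $D=[-1,1]$, $\mu$ the uniform measure, and $V_n$ spanned by Legendre polynomials; the sampling density $k_n/n$ (normalized inverse Christoffel function) and weights $w=n/k_n$ follow the standard recipe, not the paper's specific sparsified construction.

```python
import numpy as np

# Minimal sketch (illustration, not the paper's sparsified construction):
# weighted least squares on V_n = span of the first n Legendre polynomials,
# with D = [-1, 1] and mu the uniform measure dx/2.

def legendre_basis(x, n):
    """First n Legendre polynomials at x, orthonormalized in L^2(dx/2)."""
    P = np.zeros((len(x), n))
    P[:, 0] = 1.0
    if n > 1:
        P[:, 1] = x
    for k in range(1, n - 1):
        P[:, k + 1] = ((2 * k + 1) * x * P[:, k] - k * P[:, k - 1]) / (k + 1)
    return P * np.sqrt(2 * np.arange(n) + 1)  # ||P_k||^2_{dx/2} = 1/(2k+1)

def weighted_least_squares(u, n, m, rng):
    """Fit u from m samples drawn from the density k_n/n, where
    k_n(x) = sum_j |phi_j(x)|^2 is the inverse Christoffel function."""
    pts = np.empty(0)
    while pts.size < m:  # rejection sampling; k_n <= n^2 on [-1, 1]
        x = rng.uniform(-1.0, 1.0, size=8 * m)
        kn = (legendre_basis(x, n) ** 2).sum(axis=1)
        x = x[rng.uniform(size=x.size) * n**2 < kn]
        pts = np.concatenate([pts, x])[:m]
    G = legendre_basis(pts, n)
    w = n / (G**2).sum(axis=1)          # weights w(x) = n / k_n(x)
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(G * sw[:, None], u(pts) * sw, rcond=None)
    return coef  # coefficients of the estimator in the Legendre basis

rng = np.random.default_rng(0)
coef = weighted_least_squares(np.exp, n=10, m=50, rng=rng)
```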
In this paper, we investigate randomized algorithms for block matrix multiplication from a random sampling perspective. Based on the A-optimal design criterion, we derive the optimal sampling probabilities and sampling block sizes. To make the block sizes more practical, we provide two modified versions with lower computational cost, and for the second of these we also devise a two-step algorithm. Moreover, we establish probabilistic error bounds for the proposed algorithms. Extensive numerical results show that our methods outperform the existing method in the literature.
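For background, the following sketch shows the classical randomized matrix multiplication by column-row sampling, the scalar analogue of the block schemes discussed above (the paper's algorithms sample sub-blocks rather than single column-row pairs). Probabilities proportional to $\|A_{:,k}\|\,\|B_{k,:}\|$ are the standard near-optimal choice for this estimator.

```python
import numpy as np

# Hedged sketch: classical randomized matrix multiplication by column-row
# sampling (the scalar analogue of the block schemes above). Sampling
# probabilities p_k proportional to ||A[:, k]|| * ||B[k, :]|| are the
# standard near-optimal choice for this estimator.

def sampled_matmul(A, B, c, rng):
    """Unbiased estimate of A @ B from c sampled column-row pairs."""
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = p / p.sum()
    idx = rng.choice(A.shape[1], size=c, p=p)
    scale = 1.0 / (c * p[idx])  # rescaling makes the estimator unbiased
    return (A[:, idx] * scale) @ B[idx, :]

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 200))
B = rng.standard_normal((200, 40))
rel_err = (np.linalg.norm(sampled_matmul(A, B, 80, rng) - A @ B)
           / np.linalg.norm(A @ B))
```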
While it is well known that nonlinear methods of approximation can often perform dramatically better than linear methods, there are still questions on how to measure the optimal performance possible for such methods. This paper studies nonlinear methods of approximation that are compatible with numerical implementation in that they are required to be numerically stable. A measure of optimal performance, called {\em stable manifold widths}, for approximating a model class $K$ in a Banach space $X$ by stable manifold methods is introduced. Fundamental inequalities between these stable manifold widths and the entropy of $K$ are established. The effects of requiring stability in the settings of deep learning and compressed sensing are discussed.
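For orientation, the stable manifold width is commonly formalized along the following lines (our paraphrase of the standard definition, not quoted from the paper): for a stability parameter $\gamma\geq 1$,
$$\delta_{n,\gamma}(K)_X := \inf_{a,M}\, \sup_{f\in K} \|f-M(a(f))\|_X,$$
where the infimum is taken over encoding maps $a:X\to\mathbb{R}^n$ and reconstruction maps $M:\mathbb{R}^n\to X$ that are Lipschitz with constant at most $\gamma$; numerical stability is precisely this Lipschitz requirement on the pair $(a,M)$.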
In this article we obtain an optimal, best-approximation-type result for fully discrete approximations of the transient Stokes problem. For the time discretization we use the discontinuous Galerkin method, and for the spatial discretization we use standard finite elements for the Stokes problem satisfying the discrete inf-sup condition. The analysis uses the technique of discrete maximal parabolic regularity. The results require only natural assumptions on the data and do not assume any additional smoothness of the solutions.
This paper studies numerical methods for the approximation of elliptic PDEs with lognormal coefficients of the form $-{\rm div}(a\nabla u)=f$, where $a=\exp(b)$ and $b$ is a Gaussian random field. The approximant of the solution $u$ is an $n$-term polynomial expansion in the scalar Gaussian random variables that parametrize $b$. We present a general convergence analysis of weighted least-squares approximants for smooth and arbitrarily rough random fields, using a suitable random design, for which we prove optimality in the following sense: their convergence rate matches exactly or closely the rate that has been established in \cite{BCDM} for best $n$-term approximation by Hermite polynomials, under the same minimal assumptions on the Gaussian random field. This is in contrast with the current state-of-the-art results for the stochastic Galerkin method, which suffers from a lack of coercivity due to the lognormal nature of the diffusion field. Numerical tests with $b$ the Brownian bridge confirm our theoretical findings.
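A short sketch of the parametrization step: the Brownian bridge admits a Karhunen-Loeve expansion whose independent Gaussian coefficients $y_j$ are exactly the scalar variables in which the polynomial expansion of $u$ is taken. The truncation level below is an illustration choice, not taken from the paper.

```python
import numpy as np

# Hedged sketch: drawing the Gaussian field b as a Brownian bridge on [0,1]
# via its Karhunen-Loeve expansion, whose coefficients y_j are the scalar
# Gaussian variables parametrizing b. Truncation level J is illustrative.

def brownian_bridge(t, y):
    """KL expansion: b(t) = sum_{j>=1} sqrt(2) sin(j*pi*t)/(j*pi) * y_j."""
    j = np.arange(1, len(y) + 1)
    return (np.sqrt(2.0) * np.sin(np.pi * np.outer(t, j)) / (np.pi * j)) @ y

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)
y = rng.standard_normal(64)          # J = 64 Gaussian coordinates
a = np.exp(brownian_bridge(t, y))    # lognormal diffusion coefficient a = exp(b)
```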
We consider the problem of reconstructing an unknown function $u\in L^2(D,\mu)$ from its evaluations at given sampling points $x^1,\dots,x^m\in D$, where $D\subset \mathbb{R}^d$ is a general domain and $\mu$ a probability measure. The approximation is picked from a linear space $V_n$ of interest, where $n=\dim(V_n)$. Recent results have revealed that certain weighted least-squares methods achieve near-best approximation with a sampling budget $m$ that is proportional to $n$, up to a logarithmic factor $\ln(2n/\varepsilon)$, where $\varepsilon>0$ is a probability of failure. The sampling points should be picked at random according to a well-chosen probability measure $\sigma$ whose density is given by the inverse Christoffel function, which depends both on $V_n$ and $\mu$. While this approach is greatly facilitated when $D$ and $\mu$ have tensor product structure, it becomes problematic for domains $D$ with arbitrary geometry, since the optimal measure depends on an orthonormal basis of $V_n$ in $L^2(D,\mu)$ which is not explicitly given, even for simple polynomial spaces. Therefore, sampling according to this measure is not practically feasible. In this paper, we discuss practical sampling strategies, which amount to using a perturbed measure $\widetilde{\sigma}$ that can be computed in an offline stage, not involving the measurement of $u$. We show that near-best approximation is attained by the resulting weighted least-squares method at near-optimal sampling budget, and we discuss multilevel approaches that preserve optimality of the cumulated sampling budget when the spaces $V_n$ are iteratively enriched. These strategies rely on the knowledge of a-priori upper bounds on the inverse Christoffel function. We establish such bounds for spaces $V_n$ of multivariate algebraic polynomials, and for general domains $D$.
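A plausible offline/online workflow on a general domain is sketched below under illustrative choices (unit disk, total degree $<4$, Monte Carlo discretization of $\mu$): orthonormalize a basis of $V_n$ against a fine discretization of $\mu$, then sample from the resulting discrete perturbed density. None of the sizes or the domain are taken from the paper.

```python
import numpy as np

# Hedged sketch of an offline stage on a general domain: build an
# (approximately) orthonormal basis of V_n against a fine Monte Carlo
# discretization of mu, then sample from the resulting discrete density
# proportional to the empirical inverse Christoffel function k_n.
# Domain (unit disk), degree, and grid sizes are illustration choices.

rng = np.random.default_rng(3)
degs = [(a, b) for a in range(4) for b in range(4) if a + b < 4]
n = len(degs)  # dim V_n = 10 (total degree < 4 in two variables)

def monomials(x):
    return np.stack([x[:, 0] ** a * x[:, 1] ** b for a, b in degs], axis=1)

# Offline: fine discretization of mu = uniform measure on the unit disk.
Z = rng.uniform(-1.0, 1.0, size=(200_000, 2))
Z = Z[(Z**2).sum(axis=1) <= 1.0]
_, R = np.linalg.qr(monomials(Z) / np.sqrt(len(Z)))
T = np.linalg.inv(R)  # maps monomials to an empirically orthonormal basis

def k_n(x):
    """Empirical inverse Christoffel function k_n(x) = sum_j phi_j(x)^2."""
    return ((monomials(x) @ T) ** 2).sum(axis=1)

# Online: draw m points from the discrete perturbed measure, density k_n/n.
kZ = k_n(Z)
idx = rng.choice(len(Z), size=30, p=kZ / kZ.sum())
x_samples, w = Z[idx], n / kZ[idx]  # points and least-squares weights
```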