
Optimal sampling and Christoffel functions on general domains

Added by Matthieu Dolbeault
Publication date: 2020
Language: English





We consider the problem of reconstructing an unknown function $u\in L^2(D,\mu)$ from its evaluations at given sampling points $x^1,\dots,x^m\in D$, where $D\subset \mathbb{R}^d$ is a general domain and $\mu$ a probability measure. The approximation is picked from a linear space $V_n$ of interest, where $n=\dim(V_n)$. Recent results have revealed that certain weighted least-squares methods achieve near-best approximation with a sampling budget $m$ that is proportional to $n$, up to a logarithmic factor $\ln(2n/\varepsilon)$, where $\varepsilon>0$ is a probability of failure. The sampling points should be picked at random according to a well-chosen probability measure $\sigma$ whose density is given by the inverse Christoffel function, which depends both on $V_n$ and $\mu$. While this approach is greatly facilitated when $D$ and $\mu$ have tensor product structure, it becomes problematic for domains $D$ with arbitrary geometry, since the optimal measure depends on an orthonormal basis of $V_n$ in $L^2(D,\mu)$ which is not explicitly given, even for simple polynomial spaces. Sampling according to this measure is therefore not practically feasible. In this paper, we discuss practical sampling strategies, which amount to using a perturbed measure $\widetilde\sigma$ that can be computed in an offline stage, not involving the measurement of $u$. We show that near-best approximation is attained by the resulting weighted least-squares method at near-optimal sampling budget, and we discuss multilevel approaches that preserve optimality of the cumulated sampling budget when the spaces $V_n$ are iteratively enriched. These strategies rely on the knowledge of a priori upper bounds on the inverse Christoffel function. We establish such bounds for spaces $V_n$ of multivariate algebraic polynomials, and for general domains $D$.
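For orientation, the following Python sketch (not the paper's algorithm) implements Christoffel-function sampling and weighted least squares in the one model case where everything is explicit: $D=[-1,1]$, $\mu$ the uniform measure, and $V_n$ the polynomials of degree below $n$, whose orthonormal basis is given by normalized Legendre polynomials. On a general domain this basis, and hence the density $k_n/n$, is exactly what is unavailable; all function names are illustrative.

```python
# Minimal sketch, assuming D = [-1,1], mu = dx/2, V_n = polynomials of
# degree < n, where the orthonormal basis is known in closed form.
import numpy as np

def onb(x, n):
    # L^2(D, mu)-orthonormal basis: phi_j = sqrt(2j+1) * L_j (Legendre)
    V = np.polynomial.legendre.legvander(np.atleast_1d(x), n - 1)
    return V * np.sqrt(2.0 * np.arange(n) + 1.0)

def inv_christoffel(x, n):
    # k_n(x) = sum_j |phi_j(x)|^2; the optimal sampling density is
    # d(sigma)/d(mu) = k_n / n
    return (onb(x, n) ** 2).sum(axis=1)

def sample_sigma(n, m, rng):
    # Rejection sampling from sigma: on [-1,1] we have k_n <= n^2,
    # so the density k_n/n (w.r.t. mu) is bounded by n.
    out, k = np.empty(m), 0
    while k < m:
        x = rng.uniform(-1.0, 1.0, size=4 * m)
        keep = rng.uniform(size=x.size) < inv_christoffel(x, n) / n**2
        take = x[keep][: m - k]
        out[k : k + take.size] = take
        k += take.size
    return out

def weighted_lsq(u, n, m, rng):
    # Draw x^1,...,x^m ~ sigma, weight by w = n/k_n = dmu/dsigma,
    # and solve the weighted least-squares problem over V_n.
    x = sample_sigma(n, m, rng)
    w = n / inv_christoffel(x, n)
    A = onb(x, n) * np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A, u(x) * np.sqrt(w), rcond=None)
    return coef  # coefficients in the basis (phi_j)

rng = np.random.default_rng(0)
c = weighted_lsq(np.abs, n=10, m=40, rng=rng)  # approximate u(x) = |x|
```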



Related research

Fourier extension is an approximation method that alleviates the periodicity requirements of Fourier series and avoids the Gibbs phenomenon when approximating functions. We describe a similar extension approach using regular wavelet bases on a hypercube to approximate functions on subsets of that cube. These subsets may have a general shape. This construction is inherently associated with redundancy, which leads to severe ill-conditioning, but recent theory shows that nevertheless high accuracy and numerical stability can be achieved using regularization and oversampling. Regularized least squares solvers, such as the truncated singular value decomposition, that are suited to solve the resulting ill-conditioned and skinny linear system generally have cubic computational cost. We compare several algorithms that improve on this complexity. The improvements benefit from the sparsity in and the structure of the discrete wavelet transform. We present a method that requires $\mathcal O(N)$ operations in 1-D and $\mathcal O(N^{3(d-1)/d})$ in $d$-D, $d>1$. We experimentally show that direct sparse QR solvers appear to be more time-efficient, but yield larger expansion coefficients.
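The following hedged sketch illustrates the extension idea with a plain Fourier basis rather than the wavelet bases of that paper: a non-periodic function given only on the subdomain $[0, 0.5]$ is fitted by a basis that is periodic on the full interval $[0,1]$, and the resulting ill-conditioned, oversampled least-squares system is solved by a truncated SVD. The basis choice, oversampling factor, and truncation threshold are illustrative assumptions, and the fast algorithms discussed in the paper are not reproduced.

```python
# Sketch of extension approximation: periodic basis on [0,1], data on [0,0.5].
import numpy as np

def fourier_matrix(x, N):
    # Real Fourier basis on [0,1]: 1, cos(2*pi*k*x), sin(2*pi*k*x), k=1..N
    cols = [np.ones_like(x)]
    for k in range(1, N + 1):
        cols += [np.cos(2 * np.pi * k * x), np.sin(2 * np.pi * k * x)]
    return np.column_stack(cols)

def tsvd_solve(A, b, tol=1e-12):
    # Truncated SVD: discard singular values below tol * s_max; this is the
    # regularization that stabilizes the redundant, ill-conditioned system.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = s > tol * s[0]
    return Vt[r].T @ ((U[:, r].T @ b) / s[r])

f = lambda x: np.exp(x)                        # not periodic on [0,1]
N = 20                                         # 2N+1 basis functions
x = np.linspace(0.0, 0.5, 4 * (2 * N + 1))     # oversampled points in [0,0.5]
c = tsvd_solve(fourier_matrix(x, N), f(x))
err = np.max(np.abs(fourier_matrix(x, N) @ c - f(x)))
```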
Given a function $u\in L^2=L^2(D,\mu)$, where $D\subset \mathbb{R}^d$ and $\mu$ is a measure on $D$, and a linear subspace $V_n\subset L^2$ of dimension $n$, we show that near-best approximation of $u$ in $V_n$ can be computed from a near-optimal budget of $Cn$ pointwise evaluations of $u$, with $C>1$ a universal constant. The sampling points are drawn according to some random distribution, the approximation is computed by a weighted least-squares method, and the error is assessed in expected $L^2$ norm. This result improves on the results in [6,8], which require a sampling budget that is sub-optimal by a logarithmic factor, thanks to a sparsification strategy introduced in [17,18]. As a consequence, we obtain for any compact class $\mathcal K\subset L^2$ that the sampling number $\rho_{Cn}^{\rm rand}(\mathcal K)_{L^2}$ in the randomized setting is dominated by the Kolmogorov $n$-width $d_n(\mathcal K)_{L^2}$. While our result shows the existence of a randomized sampling with such near-optimal properties, we discuss remaining issues concerning its generation by a computationally efficient algorithm.
Chengmei Niu, Hanyu Li (2021)
In this paper, we investigate randomized algorithms for block matrix multiplication from a random sampling perspective. Based on the A-optimal design criterion, the optimal sampling probabilities and sampling block sizes are obtained. To improve the practicability of the block sizes, two modified versions with lower computational cost are provided; for the second one, a two-step algorithm is also devised. Moreover, probabilistic error bounds for the proposed algorithms are given. Extensive numerical results show that our methods outperform the existing method in the literature.
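For intuition, here is a minimal sketch of randomized block matrix multiplication by importance sampling. It uses the standard norm-proportional block probabilities rather than the A-optimal probabilities and block sizes derived in the paper, whose formulas are not reproduced here; names and parameters are illustrative.

```python
# Sketch: approximate A @ B from c sampled column/row blocks, assuming
# norm-proportional sampling probabilities (not the paper's A-optimal ones).
import numpy as np

def sampled_matmul(A, B, block, c, rng):
    # Partition A's columns / B's rows into blocks of size `block`, sample
    # c blocks with probability p_k ~ ||A_k||_F * ||B_k||_F, and rescale
    # so the estimator is unbiased: E[out] = A @ B.
    starts = np.arange(0, A.shape[1], block)
    norms = np.array([np.linalg.norm(A[:, s:s + block]) *
                      np.linalg.norm(B[s:s + block, :]) for s in starts])
    p = norms / norms.sum()
    out = np.zeros((A.shape[0], B.shape[1]))
    for k in rng.choice(starts.size, size=c, p=p):
        s = starts[k]
        out += (A[:, s:s + block] @ B[s:s + block, :]) / (c * p[k])
    return out

rng = np.random.default_rng(1)
A, B = rng.standard_normal((50, 200)), rng.standard_normal((200, 40))
approx = sampled_matmul(A, B, block=10, c=8, rng=rng)
rel_err = np.linalg.norm(approx - A @ B) / np.linalg.norm(A @ B)
```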
We study the recovery of multivariate functions from reproducing kernel Hilbert spaces in the uniform norm. Our main interest is to obtain preasymptotic estimates for the corresponding sampling numbers. We obtain results in terms of the decay of related singular numbers of the compact embedding into $L_2(D,\varrho_D)$, multiplied with the supremum of the Christoffel function of the subspace spanned by the first $m$ singular functions. Here the measure $\varrho_D$ is at our disposal. As an application we obtain near-optimal upper bounds for the sampling numbers of periodic Sobolev-type spaces with general smoothness weight. These can be bounded in terms of the corresponding benchmark approximation number in the uniform norm, which allows for preasymptotic bounds. By applying a recently introduced sub-sampling technique related to Weaver's conjecture we mostly lose only a $\sqrt{\log n}$ factor, and sometimes even less. Finally, we point out a relation to the corresponding Kolmogorov numbers.
In this paper we study $L_2$-norm sampling discretization and sampling recovery of complex-valued functions in RKHS on $D \subset \mathbb{R}^d$ based on random function samples. We only assume the finite trace of the kernel (Hilbert-Schmidt embedding into $L_2$) and provide several concrete estimates with precise constants for the corresponding worst-case errors. In general, our analysis does not need any additional assumptions and includes the case of non-Mercer kernels as well as non-separable RKHS. The failure probability is controlled and decays polynomially in $n$, the number of samples. Under the mild additional assumption of separability we observe improved rates of convergence related to the decay of the singular values. Our main tool is a spectral norm concentration inequality for infinite complex random matrices with independent rows, complementing earlier results by Rudelson, Mendelson, Pajor, Oliveira and Rauhut.