We study the recovery of multivariate functions from reproducing kernel Hilbert spaces in the uniform norm. Our main interest is to obtain preasymptotic estimates for the corresponding sampling numbers. We obtain results in terms of the decay of related singular numbers of the compact embedding into $L_2(D,\varrho_D)$, multiplied by the supremum of the Christoffel function of the subspace spanned by the first $m$ singular functions. Here the measure $\varrho_D$ is at our disposal. As an application we obtain near-optimal upper bounds for the sampling numbers of periodic Sobolev-type spaces with a general smoothness weight. These can be bounded in terms of the corresponding benchmark approximation number in the uniform norm, which allows for preasymptotic bounds. By applying a recently introduced sub-sampling technique related to Weaver's conjecture we mostly lose a factor of $\sqrt{\log n}$, and sometimes even less. Finally, we point out a relation to the corresponding Kolmogorov numbers.
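To fix notation, the following display sketches the quantity referred to above, under the assumption that $(\sigma_k,\eta_k)$ denote the singular numbers and $L_2(D,\varrho_D)$-orthonormal singular functions of the embedding (this notation is ours, not quoted from the paper):

\[
  N(m) \;:=\; \sup_{x\in D}\,\sum_{k=1}^{m} |\eta_k(x)|^2 .
\]

In the classical orthogonal polynomial literature the Christoffel function is the reciprocal of the sum above; in either convention, $N(m)$ measures how large a function from $\mathrm{span}\{\eta_1,\dots,\eta_m\}$ with unit $L_2(D,\varrho_D)$-norm can become pointwise.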
In this paper we study $L_2$-norm sampling discretization and sampling recovery of complex-valued functions in RKHS on $D \subset \mathbb{R}^d$ based on random function samples. We only assume the finite trace of the kernel (Hilbert-Schmidt embedding into $L_2$) and provide several concrete estimates with precise constants for the corresponding worst-case errors. In general, our analysis does not need any additional assumptions and also covers the case of non-Mercer kernels as well as non-separable RKHS. The failure probability is controlled and decays polynomially in $n$, the number of samples. Under the mild additional assumption of separability we observe improved rates of convergence related to the decay of the singular values. Our main tool is a spectral norm concentration inequality for infinite complex random matrices with independent rows, complementing earlier results by Rudelson, Mendelson, Pajor, Oliveira and Rauhut.
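As a hedged numerical illustration of recovery from i.i.d. random samples (a plain kernel least-squares sketch with our own choices of kernel, target function, and regularization; it is not the estimator analyzed in the paper):

import numpy as np

rng = np.random.default_rng(0)

def kernel(x, y, gamma=10.0):
    # Gaussian (Mercer) kernel on [0, 1]; the theory above also covers
    # non-Mercer kernels, which this toy example does not exercise.
    return np.exp(-gamma * (x[:, None] - y[None, :]) ** 2)

f = lambda x: np.sin(2 * np.pi * x) + 0.5 * np.cos(6 * np.pi * x)  # target

n = 200                          # number of samples
X = rng.uniform(0.0, 1.0, n)     # i.i.d. w.r.t. the uniform measure on [0, 1]
y = f(X)

# Regularized least-squares estimator f_n(x) = sum_i a_i k(x, X_i);
# the tiny ridge term only stabilizes the ill-conditioned kernel matrix.
K = kernel(X, X)
a = np.linalg.solve(K + 1e-8 * np.eye(n), y)

xs = np.linspace(0.0, 1.0, 2000)
res = kernel(xs, X) @ a - f(xs)
print(f"L2 error (grid approximation): {np.sqrt(np.mean(res ** 2)):.2e}")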
In this paper we present results on asymptotic characteristics of multivariate function classes in the uniform norm. Our main interest is the approximation of functions with mixed smoothness parameter not larger than $1/2$. Our focus is on the behavior of the best $m$-term trigonometric approximation as well as the decay of Kolmogorov and entropy numbers in the uniform norm. It turns out that these quantities share a few fundamental abstract properties, such as their behavior under real interpolation, so that they can be treated simultaneously. We start by proving estimates for finite-rank convolution operators with range in a step hyperbolic cross. These results imply bounds for the corresponding function space embeddings by a well-known decomposition technique. The decay of the Kolmogorov numbers has direct implications for the problem of sampling recovery in $L_2$ in situations where recent results in the literature are not applicable because the corresponding approximation numbers are not square-summable.
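For the reader's convenience, the standard definitions of the quantities involved, in common notation (for a function class $F \subset L_\infty$; these definitions are not quoted from the paper):

\[
  \sigma_m(F)_\infty \;=\; \sup_{f \in F}\,\inf_{\substack{\Lambda \subset \mathbb{Z}^d \\ |\Lambda| \le m}}\;\inf_{(c_k)}\;\Big\| f - \sum_{k \in \Lambda} c_k\, e^{i k \cdot x} \Big\|_\infty ,
  \qquad
  d_m(F)_\infty \;=\; \inf_{\dim V \le m}\;\sup_{f \in F}\;\inf_{g \in V}\, \| f - g \|_\infty ,
\]

and the entropy numbers $e_k(F)_\infty$ are the smallest $\varepsilon > 0$ such that $F$ can be covered by $2^{k-1}$ balls of radius $\varepsilon$ in $L_\infty$.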
We consider the problem of reconstructing an unknown function $u\in L^2(D,\mu)$ from its evaluations at given sampling points $x^1,\dots,x^m\in D$, where $D\subset \mathbb{R}^d$ is a general domain and $\mu$ a probability measure. The approximation is picked from a linear space $V_n$ of interest, where $n=\dim(V_n)$. Recent results have revealed that certain weighted least-squares methods achieve near-best approximation with a sampling budget $m$ that is proportional to $n$, up to a logarithmic factor $\ln(2n/\varepsilon)$, where $\varepsilon>0$ is a probability of failure. The sampling points should be picked at random according to a well-chosen probability measure $\sigma$ whose density is given by the inverse Christoffel function, which depends both on $V_n$ and $\mu$. While this approach is greatly facilitated when $D$ and $\mu$ have tensor product structure, it becomes problematic for domains $D$ with arbitrary geometry, since the optimal measure depends on an orthonormal basis of $V_n$ in $L^2(D,\mu)$ which is not explicitly given, even for simple polynomial spaces. Therefore sampling according to this measure is not practically feasible. In this paper, we discuss practical sampling strategies, which amount to using a perturbed measure $\widetilde\sigma$ that can be computed in an offline stage not involving the measurement of $u$. We show that near-best approximation is attained by the resulting weighted least-squares method at near-optimal sampling budget, and we discuss multilevel approaches that preserve optimality of the cumulated sampling budget when the spaces $V_n$ are iteratively enriched. These strategies rely on the knowledge of a priori upper bounds on the inverse Christoffel function. We establish such bounds for spaces $V_n$ of multivariate algebraic polynomials and for general domains $D$.
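A minimal runnable sketch of this weighted least-squares scheme in the simplest setting (univariate Legendre polynomials on $D=[-1,1]$ with $\mu$ uniform, where the orthonormal basis is explicit; the target function u, the budget m, and the helper names inv_christoffel and draw_optimal are our own illustrative choices):

import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)
n = 10                                    # dim(V_n)

def basis(x):
    # L^2(mu)-orthonormal Legendre basis for mu = dx/2 on [-1, 1]:
    # phi_j = sqrt(2j+1) P_j, since int P_j^2 dmu = 1/(2j+1).
    V = legendre.legvander(x, n - 1)
    return V * np.sqrt(2 * np.arange(n) + 1)

def inv_christoffel(x):
    # k_n(x) = sum_j |phi_j(x)|^2; the optimal density is (k_n / n) dmu.
    return np.sum(basis(x) ** 2, axis=1)

def draw_optimal(m):
    # Rejection sampling from sigma = (k_n / n) dmu, using the a priori
    # bound k_n <= n^2 on [-1, 1] (from |P_j| <= 1 there).
    out = []
    while len(out) < m:
        x = rng.uniform(-1.0, 1.0, m)
        keep = rng.uniform(0.0, n * n, m) < inv_christoffel(x)
        out.extend(x[keep])
    return np.array(out[:m])

u = lambda x: np.exp(x) * np.sin(3 * x)   # unknown function, observed pointwise
m = 4 * n                                 # budget proportional to n
X = draw_optimal(m)
w = n / inv_christoffel(X)                # weights w = dmu / dsigma
A = np.sqrt(w)[:, None] * basis(X)
c = np.linalg.lstsq(A, np.sqrt(w) * u(X), rcond=None)[0]
print("weighted least-squares coefficients:", np.round(c, 4))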
We tensorize the Faber spline system from [14] to prove sequence space isomorphisms for multivariate function spaces with higher mixed regularity. The respective basis coefficients are local linear combinations of discrete function values, similar to the classical Faber-Schauder system. This allows for a sparse representation of the function via a truncated series expansion that stores only a discrete (finite) set of function values. The set of nodes where the function values are taken depends on the respective function in a non-linear way. Indeed, if we choose the basis functions adaptively, significantly fewer function values are required to represent the initial function up to accuracy $\varepsilon>0$ (say, in $L_\infty$) compared to hyperbolic cross projections. In addition, due to the higher regularity of the Faber splines we overcome the (mixed) smoothness restriction $r<2$ and benefit from higher mixed regularity of the function. As a byproduct we present the solution of Problem 3.13 in the Triebel monograph [46] for the multivariate setting.
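For orientation, in the classical (second-order) univariate Faber-Schauder system the coefficients are the following local linear combinations of function values at dyadic points; the higher-order Faber splines of the paper generalize this model case:

\[
  d_{j,k}(f) \;=\; f\Big(\frac{2k+1}{2^{j+1}}\Big) \;-\; \frac{1}{2}\,\Big( f\Big(\frac{k}{2^{j}}\Big) + f\Big(\frac{k+1}{2^{j}}\Big) \Big),
\]

i.e., minus one half of a second difference of $f$, which decays rapidly in $j$ wherever $f$ is locally smooth and thereby drives the adaptive selection of nodes.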
In this paper, we consider the minimization of a Tikhonov functional with an $\ell_1$ penalty for solving linear inverse problems with sparsity constraints. One of the many approaches to this problem uses a Nemytskii operator to transform the Tikhonov functional into one with an $\ell_2$ penalty term but a nonlinear operator. The transformed problem can then be analyzed and minimized using standard methods. However, by the nature of this transform, the resulting functional is only once continuously differentiable, which prohibits the use of second-order methods. Hence, in this paper, we propose a different transformation, which leads to a twice differentiable functional that can be minimized using efficient second-order methods such as Newton's method. We provide a convergence analysis of our proposed scheme, as well as a number of numerical results showing the usefulness of our proposed approach.
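To illustrate the transformation idea, the sketch below uses only the classical $C^1$ substitution $x=\eta(z)$ with $\eta(z)=z\,|z|$ (so that $\|x\|_1=\|z\|_2^2$), which is the once-differentiable transform the paper improves upon; the paper's own twice differentiable transform is not reproduced here, so we fall back on gradient descent instead of Newton's method:

import numpy as np

rng = np.random.default_rng(2)
m, n, alpha = 30, 60, 1e-2
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]  # sparse truth
y = A @ x_true

eta = lambda z: z * np.abs(z)          # Nemytskii operator, C^1 but not C^2
deta = lambda z: 2.0 * np.abs(z)       # its (merely continuous) derivative

def J(z):
    # Transformed functional: 0.5 ||A eta(z) - y||^2 + alpha ||z||_2^2,
    # equal to the l1-Tikhonov functional at x = eta(z).
    r = A @ eta(z) - y
    return 0.5 * r @ r + alpha * (z @ z)

def grad(z):
    return deta(z) * (A.T @ (A @ eta(z) - y)) + 2.0 * alpha * z

z = rng.standard_normal(n) * 0.1
for _ in range(2000):                  # gradient descent with Armijo backtracking
    g = grad(z); t = 1.0
    while J(z - t * g) > J(z) - 0.5 * t * (g @ g):
        t *= 0.5
    z -= t * g
print("transformed objective J(z) =", round(float(J(z)), 5))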