We study two geometric properties of reproducing kernels in model spaces $K_\theta$, where $\theta$ is an inner function in the disc: overcompleteness and the existence of uniformly minimal systems of reproducing kernels which do not contain Riesz basic sequences. Both of these properties are related to the notion of the Ahern--Clark point. It is shown that uniformly minimal non-Riesz sequences of reproducing kernels exist near each Ahern--Clark point which is not an analyticity point for $\theta$, while overcompleteness may occur only near the Ahern--Clark points of infinite order and is equivalent to a zero localization property. In this context the notion of quasi-analyticity appears naturally, and as a by-product of our results we give conditions in the spirit of Ahern--Clark for the restriction of a model space to a radius to be a class of quasi-analyticity.
We use reproducing kernel methods to study various rigidity problems. The methods and setting allow us to also consider the non-positive case.
For any real $\beta$ let $H^2_\beta$ be the Hardy-Sobolev space on the unit disk $D$. The space $H^2_\beta$ is a reproducing kernel Hilbert space, and its reproducing kernel is bounded when $\beta > 1/2$. In this paper, we study composition operators $C_\varphi$ on $H^2_\beta$ for $1/2 < \beta < 1$. Our main result is that, for a non-constant analytic function $\varphi : D \to D$, the operator $C_{\varphi}$ has dense range in $H_{\beta}^{2}$ if and only if the polynomials are dense in a certain Dirichlet space of the domain $\varphi(D)$. It follows that if the range of $C_{\varphi}$ is dense in $H_{\beta}^{2}$, then $\varphi$ is a weak-star generator of $H^{\infty}$. Note that this conclusion is false for the classical Dirichlet space $\mathfrak{D}$. We also characterize Fredholm composition operators on $H^{2}_{\beta}$.
The Gaussian kernel plays a central role in machine learning, uncertainty quantification and scattered data approximation, but has received relatively little attention from a numerical analysis standpoint. The basic problem of finding an algorithm for efficient numerical integration of functions reproduced by Gaussian kernels has not been fully solved. In this article we construct two classes of algorithms that use $N$ evaluations to integrate $d$-variate functions reproduced by Gaussian kernels and prove the exponential or super-algebraic decay of their worst-case errors. In contrast to earlier work, no constraints are placed on the length-scale parameter of the Gaussian kernel. The first class of algorithms is obtained via an appropriate scaling of the classical Gauss-Hermite rules. For these algorithms we derive lower and upper bounds on the worst-case error of the forms $\exp(-c_1 N^{1/d}) N^{1/(4d)}$ and $\exp(-c_2 N^{1/d}) N^{-1/(4d)}$, respectively, for positive constants $c_1 > c_2$. The second class of algorithms we construct is more flexible and uses worst-case optimal weights for points that may be taken as a nested sequence. For these algorithms we derive upper bounds of the form $\exp(-c_3 N^{1/(2d)})$ for a positive constant $c_3$.
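As a minimal one-dimensional illustration of the first construction, a Gauss-Hermite rule can be rescaled to integrate against a Gaussian density. This is only a sketch: `scaled_gauss_hermite` is a hypothetical helper, and the `scale` parameter here is a free stand-in for the paper's length-scale-dependent scaling, not the authors' actual choice.

```python
import numpy as np

def scaled_gauss_hermite(f, n, scale=1.0):
    """Approximate the integral of f against the N(0, scale^2) Gaussian
    density using an n-point Gauss-Hermite rule with scaled nodes.

    `scale` is an illustrative free parameter, not the scaling derived
    in the paper.
    """
    # Classical Gauss-Hermite nodes/weights: sum w_i f(x_i) ~ int f(x) e^{-x^2} dx
    x, w = np.polynomial.hermite.hermgauss(n)
    # Change of variables t = sqrt(2) * scale * x turns the weight e^{-x^2}
    # into the Gaussian density with standard deviation `scale`.
    t = np.sqrt(2.0) * scale * x
    return (w / np.sqrt(np.pi)) @ f(t)
```

For example, the second moment of a standard Gaussian is recovered as `scaled_gauss_hermite(lambda t: t**2, 10)`, which is exact (up to rounding) since the rule integrates polynomials of degree up to $2n-1$ exactly.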
We introduce a vector differential operator $\mathbf{P}$ and a vector boundary operator $\mathbf{B}$ to derive a reproducing kernel along with its associated Hilbert space, which is shown to be embedded in a classical Sobolev space. This reproducing kernel is a Green kernel of the differential operator $L := \mathbf{P}^{\ast T}\mathbf{P}$ with homogeneous or nonhomogeneous boundary conditions given by $\mathbf{B}$, where we ensure that the distributional adjoint operator $\mathbf{P}^{\ast}$ of $\mathbf{P}$ is well-defined in the distributional sense. We represent the inner product of the reproducing-kernel Hilbert space in terms of the operators $\mathbf{P}$ and $\mathbf{B}$. In addition, we find relationships for the eigenfunctions and eigenvalues of the reproducing kernel and the operators with homogeneous or nonhomogeneous boundary conditions. These eigenfunctions and eigenvalues are used to compute a series expansion of the reproducing kernel and an orthonormal basis of the reproducing-kernel Hilbert space. Our theoretical results provide perhaps a more intuitive way of understanding what kind of functions are well approximated by the reproducing kernel-based interpolant to a given multivariate data sample.
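The series expansion mentioned above can be sketched in the familiar Mercer form. This is a generic template under the assumption that $L$ has a discrete spectrum with eigenpairs $(\lambda_n, e_n)$ compatible with the homogeneous boundary conditions; the paper's exact normalization and boundary treatment may differ.

```latex
K(\mathbf{x}, \mathbf{y}) = \sum_{n=1}^{\infty} \lambda_n \, e_n(\mathbf{x}) \, e_n(\mathbf{y}),
\qquad
L e_n = \lambda_n^{-1} e_n, \quad \mathbf{B} e_n = \mathbf{0},
```

where the $e_n$ are $L^2$-orthonormal, so that the eigenvalues of the Green kernel $K$ are the reciprocals of those of $L$, and $\{\sqrt{\lambda_n}\, e_n\}$ is the induced orthonormal basis of the associated reproducing-kernel Hilbert space.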
The subject of this paper is Beurling's celebrated extension of the Riemann mapping theorem \cite{Beu53}. Our point of departure is the observation that the only known proof of the Beurling-Riemann mapping theorem contains a number of gaps which seem inherent in Beurling's geometric and approximative approach. We provide a complete proof of the Beurling-Riemann mapping theorem by combining Beurling's geometric method with a number of new analytic tools, notably $H^p$-space techniques and methods from the theory of Riemann-Hilbert-Poincaré problems. One additional advantage of this approach is that it leads to an extension of the Beurling-Riemann mapping theorem for analytic maps with prescribed branching. Moreover, it allows a complete description of the boundary regularity of solutions in the (generalized) Beurling-Riemann mapping theorem, extending earlier results that have been obtained by PDE techniques. We finally consider the question of uniqueness in the extended Beurling-Riemann mapping theorem.