
Gram matrices of reproducing kernel Hilbert spaces over graphs

Published by: Sho Suda
Publication date: 2012
Paper language: English





In this paper, we introduce the notion of reproducing kernel Hilbert spaces for graphs and the Gram matrices associated with them. Our aim is to investigate the Gram matrices of reproducing kernel Hilbert spaces. We provide several bounds on the entries of the Gram matrices of reproducing kernel Hilbert spaces and characterize the graphs which attain our bounds.
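The paper's specific kernel construction for graphs is not reproduced in the abstract. As a purely illustrative sketch of how a Gram matrix arises from a positive-definite kernel on a finite graph, the following uses the regularized-Laplacian kernel $K = (I + \beta L)^{-1}$, a standard choice that is assumed here and not taken from the paper:

```python
import numpy as np

# Illustrative sketch only: the paper's kernel construction is not given
# here.  The regularized-Laplacian kernel K = (I + beta*L)^{-1} is a
# standard positive-definite kernel on a finite graph, so its matrix of
# pairwise kernel values is a valid Gram matrix.

def gram_matrix(adjacency, beta=1.0):
    """Gram matrix of the regularized-Laplacian kernel on a graph."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # combinatorial Laplacian
    return np.linalg.inv(np.eye(len(A)) + beta * L)

# 4-cycle graph
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
K = gram_matrix(A)
```

Since $I + \beta L$ is symmetric positive definite, so is its inverse, which is what makes $K$ a Gram matrix; bounds on its entries are the kind of quantity the paper studies.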




Read also

The geometry of spaces with indefinite inner product, also known as Krein spaces, is a basic tool for developing Operator Theory therein. In the present paper we establish a link between this geometry and the algebraic theory of *-semigroups. It goes via positive definite functions and the reproducing kernel Hilbert spaces related to them. Our concern is in describing properties of elements of the semigroup which determine shift operators which serve as Pontryagin fundamental symmetries.
Sneh Lata, Vern I. Paulsen (2010)
We prove two new equivalences of the Feichtinger conjecture that involve reproducing kernel Hilbert spaces. We prove that if for every Hilbert space, contractively contained in the Hardy space, each Bessel sequence of normalized kernel functions can be partitioned into finitely many Riesz basic sequences, then a general bounded Bessel sequence in an arbitrary Hilbert space can be partitioned into finitely many Riesz basic sequences. In addition, we examine some of these spaces and prove that for these spaces bounded Bessel sequences of normalized kernel functions are finite unions of Riesz basic sequences.
Let $G$ be a locally compact abelian group with a Haar measure, and $Y$ be a measure space. Suppose that $H$ is a reproducing kernel Hilbert space of functions on $G\times Y$, such that $H$ is naturally embedded into $L^2(G\times Y)$ and is invariant under the translations associated with the elements of $G$. Under some additional technical assumptions, we study the W*-algebra $\mathcal{V}$ of translation-invariant bounded linear operators acting on $H$. First, we decompose $\mathcal{V}$ into the direct integral of the W*-algebras of bounded operators acting on the reproducing kernel Hilbert spaces $\widehat{H}_\xi$, $\xi\in\widehat{G}$, generated by the Fourier transform of the reproducing kernel. Second, we give a constructive criterion for the commutativity of $\mathcal{V}$. Third, in the commutative case, we construct a unitary operator that simultaneously diagonalizes all operators belonging to $\mathcal{V}$, i.e., converts them into some multiplication operators. Our scheme generalizes many examples previously studied by Nikolai Vasilevski and other authors.
The Gaussian kernel plays a central role in machine learning, uncertainty quantification and scattered data approximation, but has received relatively little attention from a numerical analysis standpoint. The basic problem of finding an algorithm for efficient numerical integration of functions reproduced by Gaussian kernels has not been fully solved. In this article we construct two classes of algorithms that use $N$ evaluations to integrate $d$-variate functions reproduced by Gaussian kernels and prove the exponential or super-algebraic decay of their worst-case errors. In contrast to earlier work, no constraints are placed on the length-scale parameter of the Gaussian kernel. The first class of algorithms is obtained via an appropriate scaling of the classical Gauss-Hermite rules. For these algorithms we derive lower and upper bounds on the worst-case error of the forms $\exp(-c_1 N^{1/d}) N^{1/(4d)}$ and $\exp(-c_2 N^{1/d}) N^{-1/(4d)}$, respectively, for positive constants $c_1 > c_2$. The second class of algorithms we construct is more flexible and uses worst-case optimal weights for points that may be taken as a nested sequence. For these algorithms we derive upper bounds of the form $\exp(-c_3 N^{1/(2d)})$ for a positive constant $c_3$.
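As background for the first class of algorithms, the following sketch shows the classical Gauss-Hermite rule that they rescale; the paper's specific scaling is not reproduced here. The rule integrates against the weight $e^{-x^2}$ and is exact for polynomials of degree up to $2n-1$:

```python
import numpy as np

# Illustrative sketch: classical Gauss-Hermite quadrature for integrals
# of the form \int f(x) exp(-x^2) dx.  The paper's algorithms rescale
# these nodes and weights; that scaling is not reproduced here.

def gauss_hermite(f, n):
    nodes, weights = np.polynomial.hermite.hermgauss(n)
    return float(weights @ f(nodes))

# \int x^2 exp(-x^2) dx = sqrt(pi)/2; an n-point rule is exact for
# polynomials of degree up to 2n - 1, so n = 10 integrates x^2 exactly.
approx = gauss_hermite(lambda x: x ** 2, 10)
```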
Motivated by the success of reinforcement learning (RL) for discrete-time tasks such as AlphaGo and Atari games, there has been a recent surge of interest in using RL for continuous-time control of physical systems (cf. many challenging tasks in OpenAI Gym and DeepMind Control Suite). Since discretization of time is susceptible to error, it is methodologically more desirable to handle the system dynamics directly in continuous time. However, very few techniques exist for continuous-time RL and they lack flexibility in value function approximation. In this paper, we propose a novel framework for model-based continuous-time value function approximation in reproducing kernel Hilbert spaces. The resulting framework is so flexible that it can accommodate any kind of kernel-based approach, such as Gaussian processes and kernel adaptive filters, and it allows us to handle uncertainties and nonstationarity without prior knowledge about the environment or what basis functions to employ. We demonstrate the validity of the presented framework through experiments.
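As a minimal sketch of one kernel-based approximator such a framework could accommodate, the following fits a value-function estimate by kernel ridge regression with a Gaussian kernel; nothing here is taken from the paper itself, which admits any kernel method (Gaussian processes, kernel adaptive filters):

```python
import numpy as np

# Illustrative sketch: kernel ridge regression with a Gaussian kernel as
# one possible value-function approximator in an RKHS.  All names and
# parameters here are assumptions for illustration, not the paper's.

def rbf(X, Y, ls=1.0):
    """Gaussian kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * ls ** 2))

def fit_value(states, values, reg=1e-6):
    """Return a value-function estimate V(x) fitted to sampled states."""
    K = rbf(states, states)
    alpha = np.linalg.solve(K + reg * np.eye(len(states)), values)
    return lambda x: (rbf(np.atleast_2d(x), states) @ alpha)[0]

states = np.array([[0.0], [1.0], [2.0]])
values = np.array([0.0, 1.0, 4.0])
V = fit_value(states, values)
```

With a small regularizer, the estimate interpolates the sampled values while remaining a smooth element of the RKHS between them.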