
A Short Note on the Relationship of Information Gain and Eluder Dimension

Published by Kaixuan Huang
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Eluder dimension and information gain are two widely used complexity measures in bandit and reinforcement learning. Eluder dimension was originally proposed as a general complexity measure of function classes, but the common examples where it is known to be small are function spaces (vector spaces). In these cases, the primary tool for upper-bounding the eluder dimension is the elliptic potential lemma. Interestingly, the elliptic potential lemma also features prominently in the analysis of linear bandits/reinforcement learning and of their nonparametric generalization, the information gain. We show that this is not a coincidence: eluder dimension and information gain are equivalent in a precise sense for reproducing kernel Hilbert spaces.
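For readers unfamiliar with the two quantities, the standard definitions from the bandit literature are sketched below; the notation ($\gamma_T$, $\dim_E$, $V_t$) is the conventional one and is not reproduced from the paper itself. The maximum information gain after $T$ rounds for a kernel $k$ with regularization $\lambda > 0$ is

$$\gamma_T = \max_{x_1, \dots, x_T} \frac{1}{2} \log\det\big(I + \lambda^{-1} K_T\big), \qquad (K_T)_{ij} = k(x_i, x_j),$$

while the eluder dimension $\dim_E(\mathcal{F}, \epsilon)$ is (up to the usual monotonization in $\epsilon$) the length of the longest sequence $x_1, \dots, x_n$ in which every $x_t$ is $\epsilon$-independent of its predecessors, i.e. some $f, f' \in \mathcal{F}$ satisfy $\sqrt{\sum_{s < t} (f(x_s) - f'(x_s))^2} \le \epsilon$ yet $|f(x_t) - f'(x_t)| > \epsilon$. The elliptic potential lemma mentioned above is the bound

$$\sum_{t=1}^{T} \min\Big(1, \|x_t\|_{V_{t-1}^{-1}}^2\Big) \le 2 \log \frac{\det V_T}{\det V_0}, \qquad V_t = V_0 + \sum_{s=1}^{t} x_s x_s^\top,$$

which is the quantity that recurs in both lines of analysis.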


Read also

We study the relationship between the eluder dimension for a function class and a generalized notion of rank, defined for any monotone activation $\sigma : \mathbb{R} \to \mathbb{R}$, which corresponds to the minimal dimension required to represent the class as a generalized linear model. When $\sigma$ has derivatives bounded away from $0$, it is known that $\sigma$-rank gives rise to an upper bound on the eluder dimension for any function class; we show, however, that the eluder dimension can be exponentially smaller than the $\sigma$-rank. We also show that the condition on the derivative is necessary; namely, when $\sigma$ is the $\mathrm{relu}$ activation, the eluder dimension can be exponentially larger than the $\sigma$-rank.
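Concretely, one standard way to formalize this generalized rank (the exact normalization in the paper may differ) is

$$\mathrm{rank}_\sigma(\mathcal{F}) = \min\Big\{ d : \exists\, w : \mathcal{F} \to \mathbb{R}^d,\ \phi : \mathcal{X} \to \mathbb{R}^d \ \text{ s.t. } \ f(x) = \sigma\big(\langle w(f), \phi(x) \rangle\big) \ \text{ for all } f \in \mathcal{F},\ x \in \mathcal{X} \Big\},$$

so a small $\sigma$-rank means the class embeds as a low-dimensional generalized linear model with link $\sigma$.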
Koji Fujiwara, Kevin Whyte (2006)
Let $X$ be a geodesic metric space with $H_1(X)$ uniformly generated. If $X$ has asymptotic dimension one, then $X$ is quasi-isometric to an unbounded tree. As a corollary, we show that the asymptotic dimension of the curve graph of a compact, oriented surface with genus $g \ge 2$ and one boundary component is at least two.
The purpose of this note is to record a consequence, for general metric spaces, of a recent result of David Bate. We prove the following fact: let $X$ be a compact metric space of topological dimension $n$. Suppose that the $n$-dimensional Hausdorff measure of $X$, $\mathcal{H}^n(X)$, is finite. Suppose further that the lower $n$-density of the measure $\mathcal{H}^n$ is positive, $\mathcal{H}^n$-almost everywhere in $X$. Then $X$ contains an $n$-rectifiable subset of positive $\mathcal{H}^n$-measure. Moreover, the assumption on the lower density is unnecessary if one uses recently announced results of Csornyei-Jones.
Yorick Hardy (2015)
The results of [I. Ojeda, Amer. Math. Monthly, 122, pp. 60--64] provide a characterization of Kronecker square roots of matrices in terms of the symmetry and rank of the block vec matrix (rearrangement matrix). In this short note we reformulate the characterization in terms of rank only, by considering an alternative to the block vec matrix, provided that the characteristic of the underlying field is not equal to 2.
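For context, a Kronecker square root of $A$ is a matrix $B$ with $A = B \otimes B$, and the characterization rests on the rearrangement identity of Van Loan and Pitsianis (the operator name $\mathcal{R}$ below is our label, not necessarily the note's):

$$\mathcal{R}(B \otimes C) = \mathrm{vec}(B)\, \mathrm{vec}(C)^\top,$$

so that, roughly, $A = B \otimes B$ holds exactly when $\mathcal{R}(A)$ equals the symmetric rank-one matrix $\mathrm{vec}(B)\,\mathrm{vec}(B)^\top$, which is where the symmetry and rank conditions in Ojeda's characterization come from.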
Estimators for mutual information are typically biased. However, in the case of the Kozachenko-Leonenko estimator for metric spaces, a type of nearest neighbour estimator, it is possible to calculate the bias explicitly.
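As a reference point, the Euclidean form of the Kozachenko-Leonenko entropy estimator, which the metric-space version generalizes, is sketched below; the constants are the textbook ones rather than the paper's. For $N$ samples $X_1, \dots, X_N$ in $\mathbb{R}^d$,

$$\hat{H}(X) = \psi(N) - \psi(k) + \log V_d + \frac{d}{N} \sum_{i=1}^{N} \log \rho_{k,i},$$

where $\psi$ is the digamma function, $V_d = \pi^{d/2} / \Gamma(d/2 + 1)$ is the volume of the unit ball in $\mathbb{R}^d$, and $\rho_{k,i}$ is the distance from $X_i$ to its $k$-th nearest neighbour. Mutual information is then estimated via $\hat{I}(X;Y) = \hat{H}(X) + \hat{H}(Y) - \hat{H}(X,Y)$, and a bias calculation tracks how the errors of the three entropy estimates combine.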
