We study the relationship between the eluder dimension for a function class and a generalized notion of rank, defined for any monotone activation $\sigma : \mathbb{R} \to \mathbb{R}$, which corresponds to the minimal dimension required to represent the class as a generalized linear model. When $\sigma$ has derivatives bounded away from $0$, it is known that $\sigma$-rank gives rise to an upper bound on eluder dimension for any function class; we show, however, that eluder dimension can be exponentially smaller than $\sigma$-rank. We also show that the condition on the derivative is necessary; namely, when $\sigma$ is the $\mathrm{relu}$ activation, we show that eluder dimension can be exponentially larger than $\sigma$-rank.
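For concreteness, here is a minimal sketch of the $\sigma$-rank notion referenced above; the symbols $\phi$, $w_f$, and the exact quantifiers are illustrative assumptions rather than the paper's verbatim definition. For a function class $\mathcal{F}$ on a domain $\mathcal{X}$, the $\sigma$-rank is the least dimension $d$ admitting a generalized linear representation:
\[
\operatorname{rank}_{\sigma}(\mathcal{F}) \;=\; \min\Big\{\, d \in \mathbb{N} \;:\; \exists\, \phi : \mathcal{X} \to \mathbb{R}^{d} \text{ such that every } f \in \mathcal{F} \text{ can be written as } f(x) = \sigma\big(\langle w_f, \phi(x)\rangle\big) \text{ for some } w_f \in \mathbb{R}^{d} \,\Big\}.
\]
Under this reading, the paper's results compare this minimal representation dimension against the eluder dimension of $\mathcal{F}$ in both directions, depending on whether $\sigma'$ is bounded away from $0$.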
Eluder dimension and information gain are two widely used complexity measures in bandit and reinforcement learning. Eluder dimension was originally proposed as a general complexity measure of function classes, but the common examples of …
We propose a sparse and low-rank tensor regression model to relate a univariate outcome to a feature tensor, in which each unit-rank tensor from the CP decomposition of the coefficient tensor is assumed to be sparse. This structure is both parsimonious…
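A minimal sketch of the model structure described above, with third-order notation and the symbols $\mathcal{X}$, $\mathcal{B}$, $a_r, b_r, c_r$, and $R$ assumed here for illustration:
\[
y \;=\; \langle \mathcal{B}, \mathcal{X} \rangle + \varepsilon,
\qquad
\mathcal{B} \;=\; \sum_{r=1}^{R} a_r \circ b_r \circ c_r,
\]
where $\mathcal{X}$ is the feature tensor, $\langle \cdot, \cdot \rangle$ is the tensorwise inner product, and $\circ$ denotes the outer product. Sparsity is imposed on each unit-rank component's factor vectors $a_r, b_r, c_r$ (few nonzero entries), so the coefficient tensor $\mathcal{B}$ is simultaneously low-rank and sparse.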
In order to deal with the curse of dimensionality in reinforcement learning (RL), it is common practice to make parametric assumptions where values or policies are functions of some low-dimensional feature space. This work focuses on the representation…
We introduce an invariant, called mean rank, for any module $M$ over the integral group ring of a discrete amenable group $\Gamma$, as an analogue of the rank of an abelian group. It is shown that the mean dimension of the induced $\Gamma$-action on the Pontryagin dual of $M$…
Decision tree optimization is notoriously difficult from a computational perspective but essential for the field of interpretable machine learning. Despite efforts over the past 40 years, only recently have optimization breakthroughs been made that have…