We analyze the orthogonal greedy algorithm when applied to dictionaries $\mathbb{D}$ whose convex hull has small entropy. We show that if the metric entropy of the convex hull of $\mathbb{D}$ decays at a rate of $O(n^{-\frac{1}{2}-\alpha})$ for $\alpha > 0$, then the orthogonal greedy algorithm converges at the same rate. This improves upon the well-known $O(n^{-\frac{1}{2}})$ convergence rate of the orthogonal greedy algorithm in many cases, most notably for dictionaries corresponding to shallow neural networks. Finally, we show that these improved rates are sharp under the given entropy decay assumptions.
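For readers unfamiliar with the method, the orthogonal greedy algorithm (also known as orthogonal matching pursuit) admits a short sketch: at each step, select the dictionary element most correlated with the current residual, then re-fit the target by least squares on all elements selected so far. The sketch below is a minimal generic implementation over a finite dictionary of column vectors, not the paper's specific construction.

```python
import numpy as np

def orthogonal_greedy(dictionary, target, n_iter):
    """Orthogonal greedy algorithm over a finite dictionary.

    dictionary: (d, m) array whose columns are the dictionary elements.
    target: (d,) vector to approximate.
    Returns the selected column indices and the final residual.
    """
    selected = []
    residual = target.copy()
    for _ in range(n_iter):
        # Pick the element most correlated with the current residual.
        corr = dictionary.T @ residual
        k = int(np.argmax(np.abs(corr)))
        if k not in selected:
            selected.append(k)
        # Re-fit: orthogonally project the target onto the span of
        # all selected elements (this is the "orthogonal" step).
        D = dictionary[:, selected]
        coef, *_ = np.linalg.lstsq(D, target, rcond=None)
        residual = target - D @ coef
    return selected, residual
```

Because each step projects onto the full span of the selected elements, the residual is always orthogonal to every previously chosen atom, which is what distinguishes the orthogonal variant from the pure greedy algorithm.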
We present a simple randomized procedure for the prediction of a binary sequence. The algorithm uses ideas from recent developments of the theory of the prediction of individual sequences. We show that if the sequence is a realization of a stationary
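As a point of reference for this style of individual-sequence prediction, a standard randomized forecaster maintains exponentially weighted losses for a pool of experts and predicts at random with probabilities proportional to the weights. The sketch below uses only the two constant experts (always-0 and always-1); it is an illustrative textbook construction, not the paper's procedure.

```python
import math
import random

def randomized_binary_predictor(sequence, eta=0.5):
    """Randomized prediction of a binary sequence via exponentially
    weighted averaging over the two constant experts.

    Returns the number of prediction mistakes incurred online.
    """
    losses = [0.0, 0.0]  # cumulative 0-1 losses of experts "always 0", "always 1"
    mistakes = 0
    for bit in sequence:
        w0 = math.exp(-eta * losses[0])
        w1 = math.exp(-eta * losses[1])
        p1 = w1 / (w0 + w1)           # probability of predicting 1
        pred = 1 if random.random() < p1 else 0
        mistakes += int(pred != bit)
        losses[0] += int(bit != 0)
        losses[1] += int(bit != 1)
    return mistakes
```

On any sequence, the expected number of mistakes of such a forecaster exceeds that of the best constant expert by at most an additive term logarithmic in the number of experts over $\eta$, which is the basic regret guarantee this literature builds on.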
We provide a theoretical treatment of over-specified Gaussian mixtures of experts with covariate-free gating networks. We establish the convergence rates of the maximum likelihood estimation (MLE) for these models. Our proof technique is based on a n
Distances to compact sets are widely used in the field of Topological Data Analysis for inferring geometric and topological features from point clouds. In this context, the distance to a probability measure (DTM) has been introduced by Chazal et al.
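In its empirical form, the distance to a measure with mass parameter $m = k/n$ is the root mean squared distance from a query point to its $k$ nearest neighbours in the point cloud; unlike the plain distance to the set, it is robust to outliers. A minimal sketch, using brute-force pairwise distances:

```python
import numpy as np

def distance_to_measure(point_cloud, queries, k):
    """Empirical distance-to-measure (DTM) with mass parameter m = k/n.

    point_cloud: (n, d) array of sample points.
    queries: (q, d) array of evaluation points.
    Returns a (q,) array of DTM values.
    """
    # Pairwise distances between queries and cloud points: (q, n).
    diffs = queries[:, None, :] - point_cloud[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    # Root mean squared distance to the k nearest neighbours.
    knn = np.sort(dists, axis=1)[:, :k]
    return np.sqrt(np.mean(knn**2, axis=1))
```

Averaging over $k$ neighbours means a single spurious point shifts the DTM by at most a $1/k$ fraction of its squared distance, which is the robustness property exploited in topological inference.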
Kernel ridge regression is an important nonparametric method for estimating smooth functions. We introduce a new set of conditions, under which the actual rates of convergence of the kernel ridge regression estimator under both the $L_2$ norm and the n
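The estimator itself has a simple closed form: with kernel matrix $K$ and regularization $\lambda$, the coefficients solve $(K + \lambda n I)\alpha = y$ and the fitted function is $\hat f(x) = \sum_i \alpha_i k(x, x_i)$. A minimal sketch with a Gaussian kernel (the kernel choice and the $\lambda n$ scaling are illustrative conventions, not specifics of the paper):

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth):
    """Gaussian kernel matrix k(a, b) = exp(-||a - b||^2 / (2 h^2))."""
    sq = np.sum((A[:, None, :] - B[None, :, :])**2, axis=2)
    return np.exp(-sq / (2 * bandwidth**2))

def kernel_ridge_fit(X, y, lam, bandwidth):
    """Solve (K + lam * n * I) alpha = y for the KRR coefficients."""
    n = X.shape[0]
    K = gaussian_kernel(X, X, bandwidth)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def kernel_ridge_predict(X_train, alpha, X_new, bandwidth):
    """Evaluate f(x) = sum_i alpha_i k(x, x_i) at new points."""
    return gaussian_kernel(X_new, X_train, bandwidth) @ alpha
```

The regularization $\lambda$ trades bias against variance, and rate-of-convergence analyses of the kind discussed here characterize how $\lambda$ should scale with $n$ under smoothness assumptions on the target function.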
In functional linear regression, the slope parameter is a function. Therefore, in a nonparametric context, it is determined by an infinite number of unknowns. Its estimation involves solving an ill-posed problem and has points of contact with a ran