We develop an approach for feature elimination in statistical learning with kernel machines, based on recursive elimination of features. We present theoretical properties of this method and show that it is uniformly consistent in finding the correct feature space under certain generalized assumptions. We present four case studies to show that the assumptions are met in most practical situations, and present simulation results to demonstrate the performance of the proposed approach.
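To make the recursive step concrete, here is a minimal Python sketch of recursive feature elimination with a kernel SVM: at each round, the feature whose removal perturbs the fitted dual objective the least is dropped. The RBF kernel, the sensitivity criterion, and all parameter values (gamma, n_keep) are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def margin_term(X_sv, coef, gamma):
    # sum_ij alpha_i alpha_j y_i y_j K(x_i, x_j), with coef = y * alpha
    K = rbf_kernel(X_sv, X_sv, gamma=gamma)
    return coef @ K @ coef

def kernel_rfe(X, y, n_keep=2, gamma=0.5):
    """Recursively eliminate features, one per round."""
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X[:, active], y)
        coef = clf.dual_coef_.ravel()      # y_i * alpha_i over support vectors
        sv = clf.support_vectors_
        full = margin_term(sv, coef, gamma)
        # sensitivity of the dual objective to dropping each active feature
        scores = [abs(full - margin_term(np.delete(sv, k, axis=1), coef, gamma))
                  for k in range(len(active))]
        active.pop(int(np.argmin(scores)))  # drop the least informative feature
    return active

# toy check: only the first two coordinates carry signal
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = np.sign(X[:, 0] * X[:, 1])
print(kernel_rfe(X, y))                     # often recovers [0, 1]
```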
In this paper, we provide a precise characterization of the generalization properties of high-dimensional kernel ridge regression across the under- and over-parameterized regimes, depending on whether the number of training samples n exceeds the feature dimension.
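For reference, a minimal NumPy sketch of the kernel ridge regression estimator whose generalization is being characterized, $\hat f(x) = k(x, X)(K + n\lambda I)^{-1} y$. The RBF kernel and the values of lam, gamma, n, and d are illustrative assumptions; the sizes below merely place the toy example in the n < d regime.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    # Gaussian kernel matrix K_ij = exp(-gamma * ||a_i - b_j||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit_predict(X, y, X_test, lam=1e-2, gamma=1.0):
    n = len(X)
    K = rbf(X, X, gamma)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)   # ridge system
    return rbf(X_test, X, gamma) @ alpha

# n = 50 samples in d = 200 dimensions, i.e. n < d
rng = np.random.default_rng(1)
d = 200
X, X_test = rng.normal(size=(50, d)), rng.normal(size=(10, d))
y = X[:, 0] + 0.1 * rng.normal(size=50)
print(krr_fit_predict(X, y, X_test)[:3])
```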
This paper studies the estimation of the conditional density $f(x, \cdot)$ of $Y_i$ given $X_i = x$, from the observation of an i.i.d. sample $(X_i, Y_i) \in \mathbb{R}^d$, $i = 1, \ldots, n$. We assume that $f$ depends only on $r$ unknown components, with typically $r \ll d$.
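As a reference point only, here is a hedged sketch of the classical kernel conditional density estimate $\hat f(y \mid x) = \sum_i K_h(x - X_i) K_h(y - Y_i) / \sum_i K_h(x - X_i)$. The Gaussian kernels and bandwidths are illustrative, and the sketch does not exploit the $r$-component structure the paper assumes.

```python
import numpy as np

def gauss(u, h):
    # Gaussian kernel density factor K_h(u)
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2 * np.pi))

def cond_density(x, y, X, Y, hx=0.3, hy=0.3):
    # product kernel over the d covariates, Gaussian kernel in y
    wx = gauss(X - x, hx).prod(axis=1)       # K_h(x - X_i), shape (n,)
    return (wx * gauss(Y - y, hy)).sum() / wx.sum()

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
Y = X[:, 0] + 0.2 * rng.normal(size=500)     # f(y|x) depends on r = 1 component
# roughly the N(0, 0.2^2) density at 0, smoothed by the bandwidths
print(cond_density(np.zeros(3), 0.0, X, Y))
```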
Modern deep learning models employ considerably more parameters than required to fit the training data. Whereas conventional statistical wisdom suggests such models should drastically overfit, in practice these models generalize remarkably well.
In this paper we solve support vector machines in reproducing kernel Banach spaces, with reproducing kernels defined on nonsymmetric domains, rather than by the traditional methods in reproducing kernel Hilbert spaces.
This article is concerned with the spectral behavior of $p$-dimensional linear processes in the moderately high-dimensional case, when both the dimensionality $p$ and the sample size $n$ tend to infinity so that $p/n \to 0$.
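A small simulation sketch of the $p/n \to 0$ regime discussed here: the empirical spectral distribution of the sample covariance of a $p$-dimensional linear (MA(1)) process. The process, its coefficient, and the sizes $p$ and $n$ are illustrative assumptions, not the article's setting.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n, phi = 100, 10000, 0.5            # p/n = 0.01: moderately high-dimensional
Z = rng.normal(size=(p, n + 1))
X = Z[:, 1:] + phi * Z[:, :-1]         # X_t = Z_t + phi Z_{t-1}: an MA(1) linear process
S = X @ X.T / n                        # p x p sample covariance matrix
eig = np.linalg.eigvalsh(S)
# population covariance is (1 + phi^2) I, so with p/n small the
# eigenvalues concentrate near 1 + phi^2 = 1.25
print(eig.min(), eig.mean(), eig.max())
```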