We consider a problem of multiclass classification, where the training sample $S_n = \{(X_i, Y_i)\}_{i=1}^n$ is generated from the model $\mathbb{P}(Y = m \mid X = x) = \eta_m(x)$, $1 \leq m \leq M$, and $\eta_1(x), \dots, \eta_M(x)$ are unknown $\alpha$-Hölder continuous functions. Given a test point $X$, our goal is to predict its label. A widely used $\mathsf{k}$-nearest-neighbor classifier constructs estimates of $\eta_1(X), \dots, \eta_M(X)$ and uses a plug-in rule for the prediction. However, it requires a proper choice of the smoothing parameter $\mathsf{k}$, which may become tricky in some situations. In our solution, we fix several integers $n_1, \dots, n_K$, compute the corresponding $n_k$-nearest-neighbor estimates for each $m$ and each $n_k$, and apply an aggregation procedure. We study an algorithm that constructs a convex combination of these estimates such that the aggregated estimate behaves approximately as well as an oracle choice. We also provide a non-asymptotic analysis of the procedure, prove its adaptation to the unknown smoothness parameter $\alpha$ and to the margin, and establish rates of convergence under mild assumptions.
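Below is a minimal Python sketch of the aggregation idea, not the paper's exact procedure: several $n_k$-NN estimates of the class probabilities are combined convexly, with weights obtained here by exponential weighting of the held-out log-loss (the weighting rule, the function names, and the parameter `beta` are illustrative assumptions).

```python
import numpy as np

def knn_prob_estimates(X, Y, x, ks, M):
    """For each k in ks, estimate eta_m(x) as the fraction of label m
    among the k nearest neighbors of x (labels assumed in 0..M-1)."""
    order = np.argsort(np.linalg.norm(X - x, axis=1))
    return np.array([np.bincount(Y[order[:k]], minlength=M) / k for k in ks])

def aggregated_estimate(X, Y, X_val, Y_val, x, ks, M, beta=1.0):
    """Convex combination of the per-k estimates; each weight decays
    exponentially in that k's log-loss on the validation sample."""
    losses = np.zeros(len(ks))
    for xv, yv in zip(X_val, Y_val):
        est = knn_prob_estimates(X, Y, xv, ks, M)
        losses -= np.log(np.clip(est[:, yv], 1e-12, None))
    w = np.exp(-beta * (losses - losses.min()))
    w /= w.sum()                                   # convex weights
    return w @ knn_prob_estimates(X, Y, x, ks, M)  # aggregated eta-hat(x)
```

Plugging the aggregated estimate into the usual argmax rule then yields the predicted label, mimicking the oracle choice of $\mathsf{k}$ without selecting a single value.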
When data is of an extraordinarily large size or physically stored in different locations, the distributed nearest neighbor (NN) classifier is an attractive tool for classification. We propose a novel distributed adaptive NN classifier for which the…
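Since the abstract is cut off, the sketch below shows only a generic divide-and-conquer baseline for the distributed setting (each machine computes a local k-NN vote and labels are combined by majority); the paper's adaptive rule is not reproduced, and the shard interface is an assumption.

```python
import numpy as np

def local_knn_label(X_shard, Y_shard, x, k):
    """k-NN majority label computed on one machine's local shard."""
    nn = Y_shard[np.argsort(np.linalg.norm(X_shard - x, axis=1))[:k]]
    return np.bincount(nn).argmax()

def distributed_nn_label(shards, x, k):
    """Each machine votes with its local k-NN prediction; the final
    label is the majority vote across machines."""
    votes = [local_knn_label(Xs, Ys, x, k) for Xs, Ys in shards]
    return np.bincount(np.array(votes)).argmax()
```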
kNN-MT, recently proposed by Khandelwal et al. (2020a), successfully combines a pre-trained neural machine translation (NMT) model with token-level k-nearest-neighbor (kNN) retrieval to improve translation accuracy. However, the traditional kNN algorithm…
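The combination step that kNN-MT builds on can be sketched as follows: retrieved (distance, token) pairs from the datastore induce a kNN distribution via a softmax over negative distances, which is then interpolated with the NMT softmax. The temperature `T` and mixing weight `lam` below are illustrative placeholders, not the paper's tuned values.

```python
import numpy as np

def knn_mt_probs(p_nmt, dists, tokens, vocab_size, lam=0.5, T=10.0):
    """Interpolate the NMT distribution with a kNN distribution built
    from retrieved (distance, target-token) pairs."""
    w = np.exp(-np.asarray(dists, dtype=float) / T)  # softmax over -dist
    w /= w.sum()
    p_knn = np.zeros(vocab_size)
    for wi, t in zip(w, tokens):
        p_knn[t] += wi            # mass aggregates on repeated tokens
    return lam * p_knn + (1.0 - lam) * np.asarray(p_nmt)
```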
Previously, in 2014, we proposed the Nearest Descent (ND) method, capable of generating an efficient graph called the in-tree (IT). Owing to some elegant and effective features, this IT structure proves well suited for data clustering. Although there…
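As a hedged illustration only: one standard density-based in-tree construction links every point to its nearest neighbor among points of higher density, so that the densest point becomes a root. Whether this coincides with ND's exact descent rule is an assumption, as is the Gaussian density surrogate used below.

```python
import numpy as np

def in_tree(X, bandwidth=1.0):
    """Return parent[i] for each point; densest points are roots
    (parent[i] == i). Brute-force distances, for clarity only."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    density = np.exp(-(D / bandwidth) ** 2).sum(axis=1)  # KDE-like score
    parent = np.arange(len(X))
    for i in range(len(X)):
        higher = np.where(density > density[i])[0]
        if higher.size:                     # link to nearest denser point
            parent[i] = higher[np.argmin(D[i, higher])]
    return parent
```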
Both the median-based classifier and the quantile-based classifier are useful for discriminating high-dimensional data with heavy-tailed or skewed inputs. But these methods are restricted in that they assign equal weight to each variable in an unregularized…
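A minimal sketch of the unregularized median-based rule as commonly formulated: componentwise class medians, with every variable weighted equally in the L1 distance used for assignment (the quantile-based analogue would replace the median by another quantile); the function names are illustrative.

```python
import numpy as np

def fit_class_medians(X, y):
    """Componentwise class medians, one centroid per class label."""
    return {c: np.median(X[y == c], axis=0) for c in np.unique(y)}

def median_classify(x, medians):
    """Assign x to the class whose componentwise median is closest in
    L1 distance, i.e. every variable gets equal weight."""
    return min(medians, key=lambda c: np.abs(x - medians[c]).sum())
```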
Information-theoretic quantities, such as conditional entropy and mutual information, are critical data summaries for quantifying uncertainty. Current widely used approaches for computing such quantities rely on nearest neighbor methods and exhibit…
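One classical instance of the nearest-neighbor approach referred to here is the Kozachenko-Leonenko kNN estimator of differential entropy; the sketch below uses brute-force distances for clarity and returns the estimate in nats.

```python
import numpy as np
from scipy.special import digamma, gammaln

def kl_entropy(X, k=3):
    """Kozachenko-Leonenko kNN estimate of differential entropy:
    H = psi(n) - psi(k) + log V_d + (d/n) * sum_i log r_k(i)."""
    n, d = X.shape
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)               # exclude self-distances
    r_k = np.sort(D, axis=1)[:, k - 1]        # distance to k-th neighbor
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # unit-ball volume
    return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(r_k))
```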