
Predictive Power of Nearest Neighbors Algorithm under Random Perturbation

Submitted by: Guang Cheng
Publication date: 2020
Research language: English





We consider a data corruption scenario in the classical $k$-Nearest Neighbors ($k$-NN) algorithm, namely that the testing data are randomly perturbed. Under such a scenario, the impact of the corruption level on the asymptotic regret is carefully characterized. In particular, our theoretical analysis reveals a phase transition phenomenon: when the corruption level $\omega$ is below a critical order (i.e., the small-$\omega$ regime), the asymptotic regret remains the same; when it is beyond that order (i.e., the large-$\omega$ regime), the asymptotic regret deteriorates polynomially. Surprisingly, we obtain a negative result that the classical noise-injection approach will not help improve the testing performance in the beginning stage of the large-$\omega$ regime, even at the level of the multiplicative constant of the asymptotic regret. As a technical by-product, we prove that under different model assumptions, the pre-processed 1-NN proposed in \cite{xue2017achieving} will at most achieve a sub-optimal rate when the data dimension $d>4$, even if $k$ is chosen optimally in the pre-processing step.
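To make the corruption scenario concrete, here is a minimal sketch (not the authors' code) of $k$-NN evaluated on randomly perturbed test points: the classifier is fit on clean training data, the test queries are shifted by noise of level $\omega$, and the empirical regret relative to an approximate Bayes classifier is reported. The simulated distribution, the Gaussian form of the perturbation, and all parameter values are illustrative assumptions.

```python
# Minimal sketch: k-NN under test-time random perturbation (illustrative only).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n, d, k = 2000, 3, 25

def sample(n):
    """Toy distribution: X ~ Uniform[0,1]^d with a smooth regression function."""
    X = rng.uniform(size=(n, d))
    eta = 1.0 / (1.0 + np.exp(-4.0 * (X.sum(axis=1) - d / 2)))  # P(Y=1 | X)
    y = rng.binomial(1, eta)
    return X, y, eta

X_train, y_train, _ = sample(n)
X_test, y_test, eta_test = sample(5000)
bayes_acc = np.mean(np.maximum(eta_test, 1 - eta_test))  # approximate Bayes accuracy

clf = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)

for omega in (0.0, 0.01, 0.05, 0.2):
    # Test-time corruption: each query is shifted by random noise of scale omega.
    X_pert = X_test + rng.normal(scale=omega, size=X_test.shape)
    regret = bayes_acc - clf.score(X_pert, y_test)
    print(f"omega = {omega:5.2f}   empirical regret ~ {regret:.4f}")
```

Sweeping $\omega$ across several orders of magnitude in such a simulation is one way to visualize the small-$\omega$/large-$\omega$ phase transition described in the abstract.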




Read also

In this paper, we propose an ensemble learning algorithm called \textit{under-bagging $k$-nearest neighbors} (\textit{under-bagging $k$-NN}) for imbalanced classification problems. On the theoretical side, by developing a new learning theory analysis, we show that with properly chosen parameters, i.e., the number of nearest neighbors $k$, the expected sub-sample size $s$, and the number of bagging rounds $B$, optimal convergence rates for under-bagging $k$-NN can be achieved under mild assumptions with respect to the arithmetic mean (AM) of recalls. Moreover, we show that with a relatively small $B$, the expected sub-sample size $s$ can be much smaller than the number of training data $n$ at each bagging round, and the number of nearest neighbors $k$ can be reduced simultaneously, especially when the data are highly imbalanced, which leads to substantially lower time complexity and roughly the same space complexity. On the practical side, we conduct numerical experiments that verify the theoretical benefits of the under-bagging technique, showing the promising AM performance and efficiency of our proposed algorithm.
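Below is a minimal sketch, under our own assumptions, of the under-bagging idea for a binary imbalanced problem: each of the $B$ rounds draws a balanced sub-sample by under-sampling the majority class, fits a $k$-NN, and the rounds are aggregated by averaging predicted probabilities; performance is scored by the arithmetic mean of recalls (balanced accuracy). This is not the authors' reference implementation, and the toy data generator is hypothetical.

```python
# Sketch of under-bagging k-NN for binary imbalanced classification (illustrative).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import balanced_accuracy_score  # arithmetic mean of recalls

def under_bagging_knn_predict(X_tr, y_tr, X_te, k=5, B=10, seed=0):
    rng = np.random.default_rng(seed)
    minority = 1 if (y_tr == 1).sum() < (y_tr == 0).sum() else 0
    idx_min = np.flatnonzero(y_tr == minority)
    idx_maj = np.flatnonzero(y_tr != minority)
    proba_pos = np.zeros(len(X_te))
    for _ in range(B):
        # Under-sample the majority class down to the minority class size.
        sub_maj = rng.choice(idx_maj, size=len(idx_min), replace=False)
        idx = np.concatenate([idx_min, sub_maj])
        clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr[idx], y_tr[idx])
        proba_pos += clf.predict_proba(X_te)[:, list(clf.classes_).index(1)]
    return (proba_pos / B >= 0.5).astype(int)

# Hypothetical imbalanced toy data.
rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 2))
p_pos = 1.0 / (1.0 + np.exp(3.0 * X[:, 0] + 6.0))   # positives are rare
y = (rng.uniform(size=3000) < p_pos).astype(int)
X_tr, y_tr, X_te, y_te = X[:2000], y[:2000], X[2000:], y[2000:]
y_hat = under_bagging_knn_predict(X_tr, y_tr, X_te, k=5, B=10)
print("AM of recalls:", balanced_accuracy_score(y_te, y_hat))
```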
When data is of an extraordinarily large size or physically stored in different locations, the distributed nearest neighbor (NN) classifier is an attractive tool for classification. We propose a novel distributed adaptive NN classifier for which the number of nearest neighbors is a tuning parameter stochastically chosen by a data-driven criterion. An early stopping rule is proposed when searching for the optimal tuning parameter, which not only speeds up the computation but also improves the finite-sample performance of the proposed algorithm. The convergence rate of the excess risk of the distributed adaptive NN classifier is investigated under various sub-sample size compositions. In particular, we show that when the sub-sample sizes are sufficiently large, the proposed classifier achieves the nearly optimal convergence rate. The effectiveness of the proposed approach is demonstrated through simulation studies as well as an empirical application to a real-world dataset.
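A minimal sketch of the distributed part of this idea is given below: the training data are split across $m$ machines, each machine forms a local $k$-NN prediction from its own sub-sample, and the local labels are combined by majority vote. The paper's data-driven adaptive choice of the number of neighbors with early stopping is not reproduced here; using a fixed $k$ on every machine is an assumption of this sketch.

```python
# Sketch of a distributed (divide-and-conquer) NN classifier with majority voting.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def distributed_nn_predict(X_tr, y_tr, X_te, m=5, k=5, seed=0):
    """Binary labels (0/1) assumed; each of the m machines votes with a local k-NN."""
    rng = np.random.default_rng(seed)
    parts = np.array_split(rng.permutation(len(X_tr)), m)  # random split across machines
    votes = np.zeros((len(X_te), m), dtype=int)
    for j, idx in enumerate(parts):
        clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr[idx], y_tr[idx])
        votes[:, j] = clf.predict(X_te)
    return (votes.mean(axis=1) >= 0.5).astype(int)  # majority vote over machines
```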
We report a neural architecture search framework, BioNAS, that is tailored for biomedical researchers to easily build, evaluate, and uncover novel knowledge from interpretable deep learning models. The introduction of knowledge dissimilarity functions in BioNAS enables the joint optimization of predictive power and biological knowledge through searching architectures in a model space. By optimizing the consistency with existing knowledge, we demonstrate that BioNAS optimal models reveal novel knowledge in both simulated data and real data from functional genomics. BioNAS provides a useful tool for domain experts to inject their prior beliefs into automated machine learning, thereby making deep learning easily accessible to practitioners. BioNAS is available at https://github.com/zj-zhang/BioNAS-pub.
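As a schematic sketch only (not BioNAS itself, which is available at the repository above), the joint-optimization idea can be written as a single scalar reward: a candidate architecture is credited for predictive performance and penalized for disagreement with prior knowledge. The function name, the dissimilarity value, and the weight `lambda_k` are placeholders, not part of the BioNAS API.

```python
# Schematic joint objective: trade off predictive power against knowledge dissimilarity.
def joint_reward(validation_score, knowledge_dissimilarity, lambda_k=0.5):
    """Higher is better; lambda_k weights consistency with prior (biological) knowledge."""
    return validation_score - lambda_k * knowledge_dissimilarity

# A slightly less accurate architecture can win if it agrees better with prior knowledge.
print(joint_reward(0.91, 0.6))  # 0.61
print(joint_reward(0.88, 0.1))  # 0.83
```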
Modern machine learning methods, including deep learning, have achieved great success in predictive accuracy for supervised learning tasks, but may still fall short in giving useful estimates of their predictive {\em uncertainty}. Quantifying uncertainty is especially critical in real-world settings, which often involve input distributions that are shifted from the training distribution due to a variety of factors including sample bias and non-stationarity. In such settings, well-calibrated uncertainty estimates convey information about when a model's output should (or should not) be trusted. Many probabilistic deep learning methods, including Bayesian and non-Bayesian methods, have been proposed in the literature for quantifying predictive uncertainty, but to our knowledge there has not previously been a rigorous large-scale empirical comparison of these methods under dataset shift. We present a large-scale benchmark of existing state-of-the-art methods on classification problems and investigate the effect of dataset shift on accuracy and calibration. We find that traditional post-hoc calibration does indeed fall short, as do several other previous methods. However, some methods that marginalize over models give surprisingly strong results across a broad spectrum of tasks.
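As one concrete example of the kind of measurement such a benchmark relies on, here is a minimal sketch of expected calibration error (ECE), which can be computed separately on in-distribution and shifted test sets and compared. The 15-bin equal-width binning and the variable names are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch: expected calibration error (ECE) over equal-width confidence bins.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Weighted average of |accuracy - mean confidence| over confidence bins."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

# Usage (hypothetical arrays): confidences = max predicted class probability,
# correct = (predicted label == true label); compute on clean and shifted splits.
# ece_iid     = expected_calibration_error(conf_iid, correct_iid)
# ece_shifted = expected_calibration_error(conf_shifted, correct_shifted)
```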
K. Malarz, 2014
In this paper, random-site percolation thresholds for the simple cubic (SC) lattice with site neighborhoods containing next-next-next-nearest neighbors (4NN) are evaluated with Monte Carlo simulations. A recently proposed low-sampling algorithm for percolation threshold estimation [Bastas et al., arXiv:1411.5834] is implemented for studies of the top-bottom wrapping probability. The obtained percolation thresholds are $p_C(\text{4NN})=0.31160(12)$, $p_C(\text{4NN+NN})=0.15040(12)$, $p_C(\text{4NN+2NN})=0.15950(12)$, $p_C(\text{4NN+3NN})=0.20490(12)$, $p_C(\text{4NN+2NN+NN})=0.11440(12)$, $p_C(\text{4NN+3NN+NN})=0.11920(12)$, $p_C(\text{4NN+3NN+2NN})=0.11330(12)$, and $p_C(\text{4NN+3NN+2NN+NN})=0.10000(12)$, where 3NN, 2NN, and NN stand for next-next-nearest neighbors, next-nearest neighbors, and nearest neighbors, respectively. As an SC lattice with 4NN neighbors may be mapped onto two independent interpenetrating SC lattices with a lattice constant twice as large, the percolation threshold $p_C(\text{4NN})$ is exactly equal to $p_C(\text{NN})$. The simplified Bastas et al. method reaches an uncertainty of the percolation threshold value $p_C$ similar to that obtained with the classical method, but ten times faster.
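A heavily simplified sketch of the Monte Carlo ingredient is given below: it estimates the probability that occupied sites span a simple cubic lattice from top to bottom as a function of the occupation probability $p$, using only nearest-neighbor (NN) connectivity and a spanning rather than a wrapping criterion. It illustrates the general method only; the extended 4NN neighborhoods, periodic wrapping, and the low-sampling scheme of Bastas et al. are not implemented, and the lattice size and trial count are arbitrary.

```python
# Sketch: Monte Carlo estimate of the top-bottom spanning probability for
# random-site percolation on a simple cubic lattice with NN connectivity.
import numpy as np
from scipy.ndimage import label

def spanning_probability(p, L=32, trials=200, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        occupied = rng.random((L, L, L)) < p
        labels, _ = label(occupied)              # 6-connected (NN) cluster labels
        top = np.unique(labels[0])
        bottom = np.unique(labels[-1])
        # A cluster id (> 0) present on both faces means a spanning cluster.
        if np.intersect1d(top[top > 0], bottom[bottom > 0]).size > 0:
            hits += 1
    return hits / trials

# Near the NN threshold p_C ~ 0.3116 the spanning probability rises sharply.
for p in (0.28, 0.3116, 0.34):
    print(p, spanning_probability(p))
```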
