
Neural Neighborhood Encoding for Classification

Published by Parikshit Ram
Publication date: 2020
Research language: English





Inspired by the fruit-fly olfactory circuit, the Fly Bloom Filter [Dasgupta et al., 2018] can efficiently summarize the data in a single pass and has been used for novelty detection. We propose a new classifier (for binary and multi-class classification) that effectively encodes the different local neighborhoods of each class with a per-class Fly Bloom Filter. Inference on test data requires an efficient FlyHash [Dasgupta et al., 2017] operation followed by a high-dimensional, but sparse, dot product with the per-class Bloom Filters. The learning is trivially parallelizable. On the theoretical side, we establish conditions under which the prediction of our proposed classifier on any test example agrees with the prediction of the nearest-neighbor classifier with high probability. We extensively evaluate the proposed scheme on over 50 data sets of varied dimensionality to demonstrate that the predictive performance of our neuroscience-inspired classifier is competitive with nearest-neighbor classifiers and other single-pass classifiers.
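To make the pipeline concrete, here is a minimal, hedged Python sketch of the two ingredients described above: a FlyHash-style sparse random projection with a winner-take-all step, and a per-class filter built in a single pass by OR-ing the hashes of each class's training examples. The class and parameter names (FlyBloomClassifier, hash_dim, top_k, and so on) are illustrative assumptions, not the authors' implementation; in particular, the per-class filter here is a plain binary union rather than the decay-based Fly Bloom Filter of Dasgupta et al.

```python
import numpy as np

class FlyBloomClassifier:
    """Minimal sketch: FlyHash-style hashing plus per-class binary filters."""

    def __init__(self, in_dim, hash_dim=2000, nnz_per_row=6, top_k=50, seed=0):
        rng = np.random.default_rng(seed)
        # Sparse binary projection: each of the hash_dim output units
        # samples nnz_per_row input coordinates, as in FlyHash.
        self.proj = np.zeros((hash_dim, in_dim))
        for row in self.proj:
            row[rng.choice(in_dim, size=nnz_per_row, replace=False)] = 1.0
        self.top_k = top_k
        self.filters = {}  # class label -> per-class filter

    def _flyhash(self, x):
        # Project up, then keep only the top_k activations (winner-take-all),
        # yielding a high-dimensional but sparse binary code.
        a = self.proj @ x
        h = np.zeros_like(a)
        h[np.argpartition(a, -self.top_k)[-self.top_k:]] = 1.0
        return h

    def fit(self, X, y):
        # Single pass over the data: OR each example's hash into its class's
        # filter, so the filter summarizes that class's local neighborhoods.
        for x, label in zip(X, y):
            h = self._flyhash(x)
            if label not in self.filters:
                self.filters[label] = np.zeros_like(h)
            np.maximum(self.filters[label], h, out=self.filters[label])
        return self

    def predict(self, X):
        # Inference: one hash per test point, then one sparse dot product
        # with each per-class filter; predict the best-matching class.
        labels = list(self.filters)
        preds = []
        for x in X:
            h = self._flyhash(x)
            scores = [self.filters[c] @ h for c in labels]
            preds.append(labels[int(np.argmax(scores))])
        return preds
```

Prediction thus reduces to one hash per test point and one sparse dot product per class, which is what makes the single-pass scheme cheap at inference time.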


Read also

Harjot Singh Parmar (2019)
In this paper we propose the use of multiple local binary patterns (LBPs) to effectively classify land-use images. We use the UC Merced 21-class land-use image dataset. The classification task is challenging as the dataset contains intra-class variability and inter-class similarities. Our proposed method of using multi-neighborhood LBPs combined with a nearest-neighbor classifier achieves an accuracy of 77.76%. A further class-wise analysis is conducted and suitable suggestions are made for further improvements to classification accuracy.
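As a rough illustration of the multi-neighborhood idea in the abstract above, the sketch below concatenates uniform-LBP histograms at several (P, R) settings using scikit-image; the specific neighborhoods and normalization are illustrative guesses, not necessarily the paper's configuration.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def multi_lbp_features(gray_image, settings=((8, 1), (16, 2), (24, 3))):
    """Concatenate uniform-LBP histograms over several (P, R) neighborhoods."""
    feats = []
    for p, r in settings:
        codes = local_binary_pattern(gray_image, P=p, R=r, method="uniform")
        # The 'uniform' method produces P + 2 distinct code values.
        hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)
```

The resulting feature vectors can then be classified with, for example, scikit-learn's KNeighborsClassifier(n_neighbors=1) to mirror the nearest-neighbor step.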
Chengsheng Mao, Bin Hu, Lei Chen (2018)
The k-nearest-neighbor method performs classification tasks for a query sample based on the information contained in its neighborhood. Previous studies of the k-nearest-neighbor algorithm usually obtained the decision value for a class by combining the support of each sample in the neighborhood. They have generally considered the nearest neighbors separately, and potentially integral neighborhood information important for classification was lost, e.g. the distribution information. This article proposes a novel local learning method that organizes the information in the neighborhood through the local distribution. In the proposed method, additional distribution information in the neighborhood is estimated and then organized; the classification decision is made based on the maximum posterior probability estimated from the local distribution in the neighborhood. Additionally, based on the local distribution, we generate a generalized local classification form that can be effectively applied to various datasets by tuning its parameters. We use both synthetic and real datasets to evaluate the classification performance of the proposed method; the experimental results demonstrate the dimensional scalability, efficiency, effectiveness and robustness of the proposed method compared to some other state-of-the-art classifiers. The results indicate that the proposed method is effective and promising in a broad range of domains.
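A minimal sketch of classifying from a local distribution estimate, assuming a simplification of the idea above (not the paper's exact estimator or its generalized parametric form): fit a diagonal Gaussian per class to the query's k nearest neighbors and pick the class with the largest estimated posterior.

```python
import numpy as np

def local_distribution_predict(X_train, y_train, x_query, k=20, eps=1e-9):
    """Predict via per-class diagonal Gaussians fit to the k-neighborhood."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(d)[:k]                     # the k nearest neighbors
    Xn, yn = X_train[idx], y_train[idx]
    best, best_score = None, -np.inf
    for c in np.unique(yn):
        Xc = Xn[yn == c]
        prior = len(Xc) / k                     # local class prior
        mu, var = Xc.mean(axis=0), Xc.var(axis=0) + eps
        # Diagonal-Gaussian log density of the query under class c.
        loglik = -0.5 * np.sum(np.log(2 * np.pi * var)
                               + (x_query - mu) ** 2 / var)
        score = np.log(prior) + loglik          # log posterior up to a constant
        if score > best_score:
            best, best_score = c, score
    return best
```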
In this paper, we consider recommender systems with side information in the form of graphs. Existing collaborative filtering algorithms mainly utilize only immediate neighborhood information and have a hard time taking advantage of deeper neighborhoods beyond 1-2 hops. The main caveat of exploiting deeper graph information is the rapidly growing time and space complexity when incorporating information from these neighborhoods. In this paper, we propose using Graph DNA, a novel Deep Neighborhood Aware graph encoding algorithm, for exploiting deeper neighborhood information. DNA encoding computes approximate deep neighborhood information in linear time using Bloom filters, a space-efficient probabilistic data structure, and results in a per-node encoding that is logarithmic in the number of nodes in the graph. It can be used in conjunction with both feature-based and graph-regularization-based collaborative filtering algorithms. Graph DNA has the advantages of being memory and time efficient and providing additional regularization when compared to directly using higher-order graph information. We conduct experiments on real-world datasets, showing that Graph DNA can be easily used with 4 popular collaborative filtering algorithms and consistently leads to a performance boost with little computational and memory overhead.
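A hedged sketch of the Bloom-filter-based deep neighborhood encoding: every node starts with a filter containing only itself and, for `depth` rounds, ORs in its neighbors' filters, so each round extends the encoded neighborhood by one hop in O(depth * |E|) time. The hash construction and filter size here are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def graph_dna_encode(adj_lists, num_bits=256, depth=2, num_hashes=3, seed=0):
    """Approximate d-hop neighborhoods with per-node Bloom filters."""
    rng = np.random.default_rng(seed)
    n = len(adj_lists)
    # Bloom-filter insertion of node i: set num_hashes pseudo-random bits.
    filt = np.zeros((n, num_bits), dtype=bool)
    bit_choices = rng.integers(0, num_bits, size=(n, num_hashes))
    for i in range(n):
        filt[i, bit_choices[i]] = True
    for _ in range(depth):
        new = filt.copy()  # update from the previous round's filters
        for i, nbrs in enumerate(adj_lists):
            for j in nbrs:
                new[i] |= filt[j]  # union in the neighbor's filter
        filt = new
    return filt  # one num_bits-sized encoding per node
```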
The deployment of machine learning algorithms on resource-constrained edge devices is an important challenge from both theoretical and applied points of view. In this article, we focus on resource-efficient randomly connected neural networks known as Random Vector Functional Link (RVFL) networks, since their simple design and extremely fast training time make them very attractive for solving many applied classification tasks. We propose to represent input features via the density-based encoding known in the area of stochastic computing, and to use the operations of binding and bundling from the area of hyperdimensional computing for obtaining the activations of the hidden neurons. Using a collection of 121 real-world datasets from the UCI Machine Learning Repository, we empirically show that the proposed approach demonstrates higher average accuracy than the conventional RVFL. We also demonstrate that it is possible to represent the readout matrix using only integers in a limited range with minimal loss in accuracy. In this case, the proposed approach operates only on small n-bit integers, which results in a computationally efficient architecture. Finally, through hardware FPGA implementations, we show that such an approach consumes approximately eleven times less energy than the conventional RVFL.
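A minimal sketch of the encoding step described above, assuming features scaled to [0, 1]: each feature becomes a thermometer (density) code, is bound to a fixed random key by XOR, and the bound codes are bundled by summing and clipping to small integers. The key scheme and the clipping function are assumptions; the paper's exact RVFL construction may differ.

```python
import numpy as np

def hd_hidden_activations(x, n_hidden=256, kappa=3, seed=0):
    """Hidden activations via density encoding, binding, and bundling."""
    rng = np.random.default_rng(seed)
    d = len(x)
    # One fixed random binary key per input feature (the binding partner).
    keys = rng.integers(0, 2, size=(d, n_hidden), dtype=np.int64)
    h = np.zeros(n_hidden, dtype=np.int64)
    for i, xi in enumerate(x):
        # Density (thermometer) code: the first q of n_hidden bits are ones.
        q = int(round(np.clip(xi, 0.0, 1.0) * n_hidden))
        code = np.zeros(n_hidden, dtype=np.int64)
        code[:q] = 1
        h += np.bitwise_xor(code, keys[i])  # bind the feature code to its key
    # Bundling: clip the component-wise sum so activations stay small integers.
    return np.clip(h, 0, kappa)
```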
Model selection consists of comparing several candidate models according to a metric to be optimized. The process often involves a grid search or similar, together with cross-validation, which can be time-consuming and provides little information about the dataset itself. In this paper we propose a method to reduce the scope of exploration needed for the task. The idea is to quantify how much it would be necessary to depart from trained instances of a given family, reference models (RMs) carrying 'rigid' decision boundaries (e.g. decision trees), so as to obtain an equivalent or better model. In our approach, this is realized by progressively relaxing the decision boundaries of the initial decision trees (the RMs) as long as this is beneficial in terms of performance measured on an analyzed dataset. More specifically, this relaxation is performed by making use of a neural decision tree, a neural network built from decision trees (DTs). The final model produced by our method carries non-linear decision boundaries. Measuring the performance of the final model, and its agreement with its seeding RM, can help the user decide which family of models to focus on.
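To illustrate the relaxation idea, the sketch below replaces each hard split x[feature] <= threshold of a trained tree with a sigmoid gate whose temperature controls how 'rigid' the boundary is; as temperature approaches 0 the soft tree recovers the original decision tree. The node/leaf data layout is hypothetical, and this is not the paper's neural-decision-tree parameterization.

```python
import numpy as np

def soft_tree_predict(x, nodes, leaves, temperature=1.0):
    """Soft (relaxed) evaluation of a decision tree.

    nodes:  dict node_id -> (feature, threshold, left_id, right_id)
    leaves: dict leaf_id -> class probability vector
    """
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def leaf_probs(node_id, p):
        node = nodes.get(node_id)
        if node is None:                       # node_id is a leaf
            return {node_id: p}
        f, t, left, right = node
        g = sigmoid((t - x[f]) / temperature)  # soft "go left" gate
        out = leaf_probs(left, p * g)
        out.update(leaf_probs(right, p * (1.0 - g)))
        return out

    probs = leaf_probs(0, 1.0)
    # Mix leaf class distributions by the probability of reaching each leaf.
    return sum(p * np.asarray(leaves[l]) for l, p in probs.items())

# Example: a depth-1 tree splitting on feature 0 at threshold 0.5,
# with leaf class distributions for a 2-class problem.
nodes = {0: (0, 0.5, 1, 2)}
leaves = {1: [0.9, 0.1], 2: [0.2, 0.8]}
print(soft_tree_predict(np.array([0.3]), nodes, leaves, temperature=0.1))
```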
