
Feature selection in machine learning: Renyi min-entropy vs Shannon entropy

Posted by Catuscia Palamidessi
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Feature selection, in the context of machine learning, is the process of separating the highly predictive features from those that might be irrelevant or redundant. Information theory has been recognized as a useful concept for this task, as the prediction power stems from the correlation, i.e., the mutual information, between features and labels. Many algorithms for feature selection in the literature have adopted the Shannon-entropy-based mutual information. In this paper, we explore the possibility of using Renyi min-entropy instead. In particular, we propose an algorithm based on a notion of conditional Renyi min-entropy that has recently been adopted in the field of security and privacy, and which is strictly related to the Bayes error. We prove that in general the two approaches are incomparable, in the sense that we can construct datasets on which the Renyi-based algorithm performs better than the corresponding Shannon-based one, and datasets on which the situation is reversed. In practice, however, on datasets of real data, the Renyi-based algorithm tends to outperform the other one. We have carried out several experiments on the BASEHOCK, SEMEION, and GISETTE datasets, and in all of them we have indeed observed that the Renyi-based algorithm gives better results.
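The abstract does not spell out the selection procedure, so the following is only a minimal sketch under stated assumptions: discrete (e.g. binarized) features, empirical joint frequencies, and a greedy forward search that at each step adds the feature minimizing either the conditional Shannon entropy H(Y|S) or the conditional Renyi min-entropy H_inf(Y|S) = -log sum_s p(s) max_y p(y|s), the security/privacy notion mentioned above, whose defining quantity sum_s p(s) max_y p(y|s) is exactly one minus the Bayes error. All function names are illustrative, not taken from the paper.

import numpy as np
from collections import defaultdict

def _cond_dists(X_sel, y):
    # Empirical p(s) and p(y|s) for each observed value s of the selected feature tuple.
    groups = defaultdict(list)
    for row, label in zip(map(tuple, X_sel), y):
        groups[row].append(label)
    n = len(y)
    for labels in groups.values():
        counts = np.bincount(labels)          # assumes labels are coded 0..C-1
        yield len(labels) / n, counts / counts.sum()

def cond_shannon(X_sel, y):
    # H(Y|S) = sum_s p(s) H(Y|S=s), in bits.
    h = 0.0
    for p_s, p_y in _cond_dists(X_sel, y):
        nz = p_y[p_y > 0]
        h -= p_s * (nz * np.log2(nz)).sum()
    return h

def cond_renyi_min(X_sel, y):
    # H_inf(Y|S) = -log2 sum_s p(s) max_y p(y|s); the inner sum is 1 - Bayes error.
    correct = sum(p_s * p_y.max() for p_s, p_y in _cond_dists(X_sel, y))
    return -np.log2(correct)

def greedy_select(X, y, k, score=cond_renyi_min):
    # Forward selection: at each step add the feature that minimizes the score.
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = min(remaining, key=lambda f: score(X[:, selected + [f]], y))
        selected.append(best)
        remaining.remove(best)
    return selected

Calling greedy_select(X, y, k=50, score=cond_renyi_min) versus score=cond_shannon yields the two feature rankings whose behavior the paper compares.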



Read also

Chun-Wang Ma, Yu-Gang Ma (2018)
The general idea of information entropy introduced by C. E. Shannon hangs over everything we do and can be applied to a great variety of problems once the connection between a distribution and the quantities of interest is found. The Shannon information entropy essentially quantifies the information carried by a quantity through its distribution, and information-entropy-based methods have been developed in depth across many scientific areas, including physics. The dynamical nature of the heavy-ion collision (HIC) process makes it difficult to study nuclear matter and its evolution, and Shannon information entropy theory can provide new methods and observables for understanding the physical phenomena both theoretically and experimentally. To better understand HIC processes, the main characteristics of typical models, including quantum molecular dynamics models, thermodynamic models, and statistical models, are briefly introduced. Typical applications of Shannon information theory in HICs are collected, covering the chaotic behavior in the branching process of hadron collisions, the liquid-gas phase transition in HICs, and the isobaric difference scaling phenomenon for intermediate-mass fragments produced in HICs of neutron-rich systems. Even though the present applications in heavy-ion collision physics are still relatively simple, they shed light on the key questions being pursued. It is suggested to further develop information entropy methods in nuclear reaction models, as well as new analysis methods for studying the properties of nuclear matter in HICs, especially the evolution of the dynamical system.
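For reference, the quantity in question is the Shannon entropy of a discrete distribution $\{p_i\}$,

H = -\sum_i p_i \log p_i ,

with the analogous integral form for continuous distributions.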
We establish a discrete analog of the Renyi entropy comparison due to Bobkov and Madiman. For log-concave variables on the integers, the min-entropy is within log e of the usual Shannon entropy. Additionally, we investigate the entropic Rogers-Shephard inequality studied by Madiman and Kontoyiannis, and establish a sharp Renyi version for certain parameters in both the continuous and discrete cases.
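In symbols, with H the Shannon entropy and H_inf the min-entropy taken in the same logarithm base, the comparison claimed above reads

H_\infty(X) \;=\; -\log \max_k \Pr[X=k] \;\le\; H(X) \;\le\; H_\infty(X) + \log e \qquad \text{for log-concave } X \text{ on } \mathbb{Z} .

The left inequality holds for every random variable, since the min-entropy is the smallest of the Renyi entropies; the right one is the discrete analog of the Bobkov-Madiman comparison stated in the abstract.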
Recently a new quantum generalization of the Renyi divergence and the corresponding conditional Renyi entropies was proposed. Here we report on a surprising relation between conditional Renyi entropies based on this new generalization and conditional Renyi entropies based on the quantum relative Renyi entropy that was used in previous literature. Our result generalizes the well-known duality relation H(A|B) + H(A|C) = 0 of the conditional von Neumann entropy for tripartite pure states to Renyi entropies of two different kinds. As a direct application, we prove a collection of inequalities that relate different conditional Renyi entropies and derive a new entropic uncertainty relation.
We investigate the Renyi entropy of independent sums of integer-valued random variables through Fourier-theoretic means, and give sharp comparisons between the variance and the Renyi entropy for Poisson-Bernoulli variables. As applications, we prove that a discrete "min-entropy power" is superadditive on independent variables up to a universal constant, and give new bounds on an entropic generalization of the Littlewood-Offord problem that are sharp in the "Poisson regime".
Privacy preservation in machine learning is a crucial issue in industrial informatics, since the data used for training in industry usually contain sensitive information. Existing differentially private machine learning algorithms have not considered the impact of data correlation, which may lead to more privacy leakage than expected in industrial applications. For example, data collected for traffic monitoring may contain correlated records due to temporal correlation or user correlation. To fill this gap, we propose a correlation reduction scheme with differentially private feature selection, addressing the privacy loss that occurs when data are correlated in machine learning tasks. The key to the proposed scheme is to describe the data correlation and to select features that lead to less data correlation across the whole dataset. The proposed scheme involves five steps with the goals of managing the extent of data correlation, preserving privacy, and supporting accuracy in the prediction results. In this way, the impact of data correlation is relieved by the proposed feature selection scheme, and privacy in the presence of data correlation is guaranteed. The proposed method can be widely used in machine learning algorithms that provide services in industrial areas. Experiments show that the proposed scheme produces better prediction results for machine learning tasks and smaller mean square errors for data queries compared to existing schemes.
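The five steps are not given in this abstract, so the following is only an illustrative sketch of the generic building block involved: making a feature-scoring step differentially private by perturbing the scores before ranking (Laplace mechanism). It is not the authors' correlation-reduction scheme, the function name is hypothetical, and the noise scale is a simplifying placeholder that a real analysis would have to calibrate to the sensitivity of the chosen score and the privacy budget.

import numpy as np

def dp_top_k_features(scores, k, epsilon, sensitivity, rng=None):
    # Select k features from Laplace-noised scores. `sensitivity` must bound how much
    # a single record can change any one score; splitting the budget as epsilon/k per
    # selected feature is an assumption here, not a claim about the paper's analysis.
    rng = rng or np.random.default_rng()
    scale = sensitivity * k / epsilon
    noisy = np.asarray(scores, dtype=float) + rng.laplace(0.0, scale, size=len(scores))
    return np.argsort(noisy)[::-1][:k].tolist()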

