
Boolean functions: noise stability, non-interactive correlation distillation, and mutual information

Posted by: Jiange Li
Publication date: 2018
Research field: Informatics engineering
Paper language: English





Let $T_{\epsilon}$ be the noise operator acting on Boolean functions $f:\{0,1\}^n \to \{0,1\}$, where $\epsilon\in[0,1/2]$ is the noise parameter. Given $\alpha>1$ and a fixed mean $\mathbb{E} f$, which Boolean function $f$ has the largest $\alpha$-th moment $\mathbb{E}(T_\epsilon f)^\alpha$? This question has close connections with the noise stability of Boolean functions, the problem of non-interactive correlation distillation, and the Courtade-Kumar conjecture on the most informative Boolean function. In this paper, we characterize maximizers in some extremal settings, such as low noise ($\epsilon=\epsilon(n)$ close to 0), high noise ($\epsilon=\epsilon(n)$ close to 1/2), and large $\alpha=\alpha(n)$. Analogous results are also established in more general contexts, such as Boolean functions defined on the discrete torus $(\mathbb{Z}/p\mathbb{Z})^n$ and the problem of noise stability in a tree model.
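The noise operator has an explicit description: $T_\epsilon f(x)$ is the expected value of $f$ at the point obtained from $x$ by flipping each coordinate independently with probability $\epsilon$. A minimal Python sketch (not from the paper; brute force over the cube, so feasible only for small $n$, with function names and the dictator/majority toy comparison our own) computes $T_\epsilon f$ and the $\alpha$-th moment directly from that definition:

```python
import itertools

def noise_operator(f, n, eps):
    """T_eps f(x) = E[f(y)], where y is obtained from x by flipping
    each coordinate independently with probability eps."""
    cube = list(itertools.product([0, 1], repeat=n))
    Tf = {}
    for x in cube:
        val = 0.0
        for y in cube:
            d = sum(xi != yi for xi, yi in zip(x, y))  # Hamming distance
            val += eps ** d * (1 - eps) ** (n - d) * f(y)
        Tf[x] = val
    return Tf

def alpha_moment(f, n, eps, alpha):
    """E[(T_eps f)^alpha] under the uniform measure on {0,1}^n."""
    Tf = noise_operator(f, n, eps)
    return sum(v ** alpha for v in Tf.values()) / 2 ** n

# Dictator and majority both have mean 1/2 on n = 3, so their
# alpha-th moments can be compared at fixed E[f], which is the
# shape of the extremal question studied in the paper.
dictator = lambda x: x[0]
majority = lambda x: int(sum(x) >= 2)
print(alpha_moment(dictator, 3, 0.1, 2.0))
print(alpha_moment(majority, 3, 0.1, 2.0))
```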




Read also

We propose a method for learning Markov network structures for continuous data without invoking any assumptions about the distribution of the variables. The method builds on previous work on a non-parametric estimator for mutual information, which is used to construct a non-parametric test for multivariate conditional independence. This independence test is then combined with an efficient constraint-based algorithm for learning the graph structure. The performance of the method is evaluated on several synthetic data sets, and it is shown to learn considerably more accurate structures than competing methods when the dependencies between the variables involve non-linearities.
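As a rough illustration of the general idea (turning a non-parametric mutual-information estimate into an independence test), the sketch below uses scikit-learn's kNN-based mutual_info_regression as the test statistic in a permutation test; the paper's own estimator and its multivariate conditional test are not reproduced here, and the example data are ours.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mi_permutation_test(x, y, n_perm=200, seed=0):
    """Permutation test for independence of two 1-D samples, using a
    kNN mutual-information estimate as the test statistic."""
    rng = np.random.default_rng(seed)
    stat = mutual_info_regression(x.reshape(-1, 1), y)[0]
    null = np.array([
        mutual_info_regression(x.reshape(-1, 1), rng.permutation(y))[0]
        for _ in range(n_perm)
    ])
    return stat, float(np.mean(null >= stat))  # statistic and p-value

# A non-linear dependence that a linear correlation test would miss:
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = x ** 2 + 0.1 * rng.normal(size=500)
print(mi_permutation_test(x, y))
```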
We propose a new estimator to measure directed dependencies in time series. The dimensionality of the data is first reduced using a new non-uniform embedding technique, in which the variables are ranked according to a weighted sum of the amount of new information and the improvement in prediction accuracy that they provide. Then, using a greedy approach, the most informative subsets are selected iteratively. The algorithm terminates when the highest-ranked variable cannot significantly improve the prediction accuracy over that obtained with the already selected subsets. In a simulation study, we compare our estimator to existing state-of-the-art methods at different data lengths and strengths of directed dependency. The proposed estimator is shown to be significantly more accurate than existing methods, especially in the difficult case where the data are highly correlated and coupled. Moreover, we show that its rate of false detection of directed dependencies due to instantaneous coupling is lower than that of existing measures. We also demonstrate the applicability of the proposed estimator on real intracranial electroencephalography data.
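A minimal sketch of the greedy selection loop described above, with one simplification: candidates are scored here by cross-validated prediction accuracy alone, rather than by the paper's weighted sum of new information and accuracy improvement. Function names and the stopping threshold are our own.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

def greedy_embedding(candidates, target, min_gain=0.01):
    """Greedy non-uniform embedding: repeatedly add the candidate
    column (e.g. a lagged variable) that most improves prediction of
    the target; stop when the improvement becomes negligible."""
    selected, best_score = [], -np.inf
    remaining = list(range(candidates.shape[1]))
    while remaining:
        scores = []
        for j in remaining:
            X = candidates[:, selected + [j]]
            s = cross_val_score(KNeighborsRegressor(), X, target, cv=5).mean()
            scores.append((s, j))
        s, j = max(scores)
        if s - best_score < min_gain:  # no significant improvement
            break
        selected.append(j)
        remaining.remove(j)
        best_score = s
    return selected

# Toy usage: the target depends on columns 0 and 2 of the candidates.
rng = np.random.default_rng(0)
Z = rng.normal(size=(400, 5))
y = np.sin(Z[:, 0]) + 0.5 * Z[:, 2] + 0.1 * rng.normal(size=400)
print(greedy_embedding(Z, y))  # likely selects columns 0 and 2
```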
Conditional Mutual Information (CMI) is a measure of conditional dependence between random variables X and Y, given another random variable Z. It can be used to quantify conditional dependence among variables in many data-driven inference problems such as graphical models, causal learning, feature selection, and time-series analysis. While k-nearest neighbor (kNN) and kernel-based estimators have been widely used for CMI estimation, they suffer severely from the curse of dimensionality. In this paper, we leverage advances in classifiers and generative models to design methods for CMI estimation. Specifically, we introduce an estimator for KL-divergence based on the likelihood ratio, obtained by training a classifier to distinguish the observed joint distribution from the product distribution. We then show how to construct several CMI estimators from this basic divergence estimator by drawing on ideas from conditional generative models. We demonstrate that the estimates from our proposed approaches do not degrade with increasing dimension and obtain a significant improvement over the widely used KSG estimator. Finally, as an application of accurate CMI estimation, we use our best estimator for conditional independence testing, where it outperforms the state-of-the-art tester on both simulated and real data sets.
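The basic divergence estimator described here is simple to sketch: a classifier trained on balanced samples from P and Q has a logit that approximates the pointwise log-likelihood ratio log p(x)/q(x), and averaging that logit over samples from P estimates KL(P||Q). Taking P as the joint and Q as a permutation-simulated product of marginals then yields mutual information. The sketch below uses a quadratic logistic regression (our choice, sufficient for the Gaussian test case; the paper uses more flexible classifiers and extends the construction to conditional MI), and evaluating on the training samples adds a small optimistic bias that a serious implementation would remove with held-out data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def classifier_kl(p_samples, q_samples):
    """Estimate KL(P || Q) by training a classifier to separate the two
    samples; with balanced classes, its logit estimates log p(x)/q(x)."""
    X = np.vstack([p_samples, q_samples])
    y = np.r_[np.ones(len(p_samples)), np.zeros(len(q_samples))]
    clf = make_pipeline(PolynomialFeatures(2),
                        LogisticRegression(max_iter=1000)).fit(X, y)
    return clf.decision_function(p_samples).mean()

def classifier_mi(x, y):
    """I(X;Y) = KL(joint || product of marginals); the product
    distribution is simulated by permuting y to break the dependence."""
    joint = np.column_stack([x, y])
    product = np.column_stack([x, np.random.permutation(y)])
    return classifier_kl(joint, product)

# Correlated Gaussians: true I(X;Y) = -0.5 * log(1 - rho^2) nats.
rho, n = 0.8, 20000
x = np.random.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * np.random.normal(size=n)
print(classifier_mi(x, y), -0.5 * np.log(1 - rho**2))
```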
Mutual information is a widely used information-theoretic measure that quantifies the amount of association between variables. It is used extensively in many applications such as image registration, diagnosis of failures in electrical machines, pattern recognition, data mining, and tests of independence. The main goal of this paper is to provide an efficient estimator of the mutual information based on the approach of Al Labadi et al. (2021). The estimator is explored through various examples and is compared to its frequentist counterpart due to Berrett et al. (2019). The results show the good performance of the procedure, which attains a smaller mean squared error.
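For orientation, the crudest alternative to estimators like these is the plug-in estimate from a 2-D histogram, sketched below. It is heavily biased by the choice of binning, which is precisely what the kNN and Bayesian estimators discussed in these papers are designed to avoid; the example is ours, not from the paper.

```python
import numpy as np

def plugin_mi(x, y, bins=16):
    """Naive plug-in estimate of I(X;Y) in nats from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                     # empirical joint pmf
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

# Correlated Gaussians: true I(X;Y) = -0.5 * log(1 - rho^2) nats.
rho, n = 0.6, 50000
x = np.random.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * np.random.normal(size=n)
print(plugin_mi(x, y), -0.5 * np.log(1 - rho**2))
```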
Sk Sazim, Pankaj Agrawal (2016)
We introduce a new information-theoretic measure of quantum correlations for multiparticle systems. We take a form of multivariate mutual information, the interaction information, and generalize it to multiparticle quantum systems. There are a number of possible generalizations; we consider two of them. One is related to the notion of quantum discord and the other to the concept of quantum dissension. This new measure, called the dissension vector, is a set of numbers: a quantumness vector. It can be thought of as a fine-grained measure, as opposed to measures that quantify some average quantum property of a system. These quantities quantify and characterize the correlations present in multiparticle states. We consider some multiqubit states and find that these quantities are responsive to different aspects of the quantumness and correlations present in a state. We find that different dissension vectors can track either the correlations (both classical and quantum) or the quantumness only. As physical applications, we show that these vectors may be useful in several information-processing tasks. We consider the role of dissension vectors (a) in deciding the security of the BB84 protocol against an eavesdropper and (b) in determining the possible role of correlations in the performance of the Grover search algorithm. In particular, for the Grover search algorithm, we find that dissension vectors can detect the correlations and attain their maximum where one expects the correlations to be largest.
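The classical quantity being generalized here, the interaction information, serves as a useful reference point. A minimal sketch (the classical baseline only, computed from a joint pmf with the convention I(X;Y;Z) = I(X;Y) - I(X;Y|Z); it does not implement the paper's quantum dissension vector):

```python
import numpy as np

def H(p):
    """Shannon entropy in bits of a pmf given as an array."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def interaction_information(pxyz):
    """Classical interaction information:
    I(X;Y;Z) = I(X;Y) - I(X;Y|Z)
             = H(X)+H(Y)+H(Z) - H(XY)-H(XZ)-H(YZ) + H(XYZ)."""
    hx = H(pxyz.sum(axis=(1, 2)))
    hy = H(pxyz.sum(axis=(0, 2)))
    hz = H(pxyz.sum(axis=(0, 1)))
    hxy = H(pxyz.sum(axis=2))
    hxz = H(pxyz.sum(axis=1))
    hyz = H(pxyz.sum(axis=0))
    return hx + hy + hz - hxy - hxz - hyz + H(pxyz)

# XOR triple (Z = X xor Y, with X and Y uniform and independent):
# pairwise independent but jointly dependent, so I(X;Y;Z) = -1 bit.
p = np.zeros((2, 2, 2))
for x in (0, 1):
    for y in (0, 1):
        p[x, y, (x + y) % 2] = 0.25
print(interaction_information(p))
```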