
An Integral Representation of the Logarithmic Function with Applications in Information Theory

Posted by: Igal Sason
Publication date: 2019
Research field: Information engineering
Paper language: English





We explore a well-known integral representation of the logarithmic function, and demonstrate its usefulness in obtaining compact, easily-computable exact formulas for quantities that involve expectations and higher moments of the logarithm of a positive random variable (or the logarithm of a sum of positive random variables). The integral representation of the logarithm proves useful in a variety of information-theoretic applications, including universal lossless data compression, entropy and differential entropy evaluations, and the calculation of the ergodic capacity of the single-input, multiple-output (SIMO) Gaussian channel with random parameters (known to both transmitter and receiver). This integral representation and its variants are anticipated to serve as a useful tool in additional applications, as a rigorous alternative to the popular (but non-rigorous) replica method (at least in some situations).
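
For concreteness, the "well-known integral representation" referenced above is presumably the classical Frullani-type identity for the natural logarithm; a minimal sketch of how it turns an expected logarithm into a one-dimensional integral of a Laplace transform (assuming Fubini's theorem applies) is
\[
\ln x \;=\; \int_0^{\infty} \frac{e^{-u} - e^{-ux}}{u}\, du, \qquad x > 0,
\]
so that, for a positive random variable $X$,
\[
\mathbb{E}[\ln X] \;=\; \int_0^{\infty} \frac{e^{-u} - \mathbb{E}\big[e^{-uX}\big]}{u}\, du .
\]
When $X$ is a sum of independent positive random variables, $\mathbb{E}[e^{-uX}]$ factorizes into a product of one-dimensional Laplace transforms, which is what makes the resulting formulas easy to compute.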


Read also

Rami Atar, Neri Merhav (2014)
A well-known technique for estimating probabilities of rare events in general, and in information theory in particular (used, e.g., in the sphere-packing bound), is that of finding a reference probability measure under which the event of interest has probability of order one and estimating the probability in question by means of the Kullback-Leibler divergence. A method has recently been proposed in [2] that can be viewed as an extension of this idea, in which the probability under the reference measure may itself be decaying exponentially, and the Renyi divergence is used instead. The purpose of this paper is to demonstrate the usefulness of this approach in various information-theoretic settings. For the problem of channel coding, we provide a general methodology for obtaining matched, mismatched and robust error exponent bounds, as well as new results in a variety of particular channel models. Other applications we address include rate-distortion coding and the problem of guessing.
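
As background for the classical change-of-measure idea described above (which the Renyi-divergence method of [2] extends), one standard bound follows from the data-processing inequality for relative entropy applied to the indicator of the event $A$: with $d(q\|p) = q\ln\frac{q}{p} + (1-q)\ln\frac{1-q}{1-p}$ denoting the binary divergence and $h(\cdot)$ the binary entropy function (in nats),
\[
D(Q\|P) \;\ge\; d\big(Q(A)\,\big\|\,P(A)\big) \;\ge\; Q(A)\,\ln\frac{1}{P(A)} \;-\; h\big(Q(A)\big),
\]
and hence
\[
P(A) \;\ge\; \exp\!\left(-\,\frac{D(Q\|P) + h\big(Q(A)\big)}{Q(A)}\right).
\]
Choosing the reference measure $Q$ so that $Q(A)$ is of order one thus ties the exponential decay of $P(A)$ to the divergence $D(Q\|P)$; this is a standard textbook form of the idea, not necessarily the exact statement used in the cited works.
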
The objective of this paper is to further investigate various applications of the information Nonanticipative Rate Distortion Function (NRDF) by discussing two working examples, the Binary Symmetric Markov Source with parameter $p$ (BSMS($p$)) with Hamming distance distortion, and the multidimensional partially observed Gaussian-Markov source. For the BSMS($p$), we give the solution to the NRDF, and we use it to compute the Rate Loss (RL) of causal codes with respect to noncausal codes. For the multidimensional Gaussian-Markov source, we give the solution to the NRDF, we show its operational meaning via joint source-channel matching over a vector of parallel Gaussian channels, and we compute the RL of causal and zero-delay codes with respect to noncausal codes.
Qianli Zhou, Yong Deng (2020)
For a given moment in time, the information volume represented in a probability space can be accurately measured by Shannon entropy. But in real life, the results of things usually change over time, and predicting the information volume contained in the future is still an open question. Deng entropy, proposed by Deng in recent years, is widely applied to measuring uncertainty, but its physical explanation is controversial. In this paper, we give Deng entropy a new explanation based on the fractal idea, and propose its generalization, called time fractal-based (TFB) entropy. TFB entropy is understood as predicting the uncertainty over a period of time by splitting times, and its maximum value, called the higher order information volume of the mass function (HOIVMF), can express more uncertain information than all existing methods.
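
For reference, Deng entropy of a basic probability assignment (mass function) $m$ over the power set of a frame of discernment $X$ is commonly written as
\[
E_d(m) \;=\; -\sum_{\emptyset \neq A \subseteq X} m(A)\, \log_2 \frac{m(A)}{2^{|A|}-1},
\]
which reduces to Shannon entropy when every focal element is a singleton ($|A| = 1$, so $2^{|A|}-1 = 1$); the factor $2^{|A|}-1$ counts the nonempty subsets of $A$.
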
Neri Merhav, Igal Sason (2020)
This work is an extension of our earlier article, where a well-known integral representation of the logarithmic function was explored, and was accompanied by demonstrations of its usefulness in obtaining compact, easily-calculable, exact formulas for quantities that involve expectations of the logarithm of a positive random variable. Here, in the same spirit, we derive an exact integral representation (in one or two dimensions) of the moment of a nonnegative random variable, or the sum of such independent random variables, where the moment order is a general positive noninteger real (also known as fractional moments). The proposed formula is applied to a variety of examples with an information-theoretic motivation, and it is shown how it facilitates their numerical evaluations. In particular, when applied to the calculation of a moment of the sum of a large number, $n$, of nonnegative random variables, it is clear that integration over one or two dimensions, as suggested by our proposed integral representation, is significantly easier than the alternative of integrating over $n$ dimensions, as needed in the direct calculation of the desired moment.
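
One standard representation of this kind, restricted for illustration to fractional orders $0 < \rho < 1$ (the article itself treats general positive noninteger orders, so its formula may differ), is
\[
x^{\rho} \;=\; \frac{\rho}{\Gamma(1-\rho)} \int_0^{\infty} \frac{1 - e^{-ux}}{u^{\rho+1}}\, du, \qquad x \ge 0,
\]
so that, assuming Fubini's theorem applies,
\[
\mathbb{E}\big[X^{\rho}\big] \;=\; \frac{\rho}{\Gamma(1-\rho)} \int_0^{\infty} \frac{1 - \mathbb{E}\big[e^{-uX}\big]}{u^{\rho+1}}\, du .
\]
For $X = X_1 + \dots + X_n$ with independent nonnegative summands, $\mathbb{E}[e^{-uX}]$ is a product of $n$ one-dimensional Laplace transforms, which is why a one- (or two-) dimensional integral is preferable to a direct $n$-dimensional computation of the moment.
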
Maxim Raginsky, Igal Sason (2015)
During the last two decades, concentration of measure has been a subject of various exciting developments in convex geometry, functional analysis, statistical physics, high-dimensional statistics, probability theory, information theory, communications and coding theory, computer science, and learning theory. One common theme which emerges in these fields is probabilistic stability: complicated, nonlinear functions of a large number of independent or weakly dependent random variables often tend to concentrate sharply around their expected values. Information theory plays a key role in the derivation of concentration inequalities. Indeed, the entropy method and the approach based on transportation-cost inequalities are two major information-theoretic paths toward proving concentration. This brief survey is based on a recent monograph of the authors in the Foundations and Trends in Communications and Information Theory (available online at http://arxiv.org/pdf/1212.4663v8.pdf), and a tutorial given by the authors at ISIT 2015. It introduces information theorists to three main techniques for deriving concentration inequalities: the martingale method, the entropy method, and the transportation-cost inequalities. Some applications in information theory, communications, and coding theory are used to illustrate the main ideas.
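
As a concrete instance of the concentration phenomenon described above (standard material, not specific to the surveyed monograph), McDiarmid's bounded-differences inequality states that if $f(X_1,\dots,X_n)$ changes by at most $c_i$ whenever only its $i$-th argument is altered, and $X_1,\dots,X_n$ are independent, then
\[
\Pr\Big\{ f(X_1,\dots,X_n) - \mathbb{E}\big[f(X_1,\dots,X_n)\big] \ge t \Big\} \;\le\; \exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n} c_i^2}\right), \qquad t > 0,
\]
a bound typically derived via the martingale (Azuma-Hoeffding) route mentioned above.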