
An Elementary Proof of a Classical Information-Theoretic Formula

Posted by: Guangyue Han
Publication date: 2018
Research field: Information Engineering
Paper language: English





A renowned information-theoretic formula by Shannon expresses the mutual information rate of a white Gaussian channel with a stationary Gaussian input as an integral of a simple function of the power spectral density of the channel input. We give in this paper a rigorous yet elementary proof of this classical formula. Unlike conventional approaches, which either rely on heavy mathematical machinery or have to resort to external results, our proof, which hinges on a recently proven sampling theorem, is elementary and self-contained, using only well-known facts from basic calculus and matrix theory.
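For reference, one standard rendering of the formula (normalization conventions vary across texts, so treat this as our sketch rather than the paper's exact statement): for the channel $Y(t) = X(t) + B(t)$, where $B$ is white Gaussian noise with two-sided power spectral density $N$ and $X$ is a stationary Gaussian input with power spectral density $S_X(\omega)$, the mutual information rate is

$$ \bar{I}(X;Y) \;=\; \frac{1}{4\pi}\int_{-\infty}^{\infty} \log\!\Big(1+\frac{S_X(\omega)}{N}\Big)\, d\omega $$

in nats per unit time.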




Read also

A finite form of de Finetti's representation theorem is established using elementary information-theoretic tools: the distribution of the first $k$ random variables in an exchangeable binary vector of length $n \geq k$ is close to a mixture of product distributions. Closeness is measured in terms of the relative entropy, and an explicit bound is provided.
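Schematically (our rendering; the paper's explicit bound is not reproduced here), the statement asserts the existence of a mixing measure $\mu$ on $[0,1]$ such that

$$ D\Big( P_{X_1^k} \;\Big\|\; \int_0^1 \mathrm{Bern}(\theta)^{\otimes k}\, \mu(d\theta) \Big) \;\le\; \varepsilon(n,k), $$

where $\varepsilon(n,k) \to 0$ as $n \to \infty$ for fixed $k$.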
Given a probability measure $\mu$ over $\mathbb{R}^n$, it is often useful to approximate it by a convex combination of a small number of probability measures, such that each component is close to a product measure. Recently, Ronen Eldan used a stochastic localization argument to prove a general decomposition result of this type. In Eldan's theorem, the `number of components' is characterized by the entropy of the mixture, and `closeness to product' is characterized by the covariance matrix of each component. We present an elementary proof of Eldan's theorem which makes use of an information theory (or estimation theory) interpretation. The proof is analogous to that of an earlier decomposition result known as the `pinning lemma'.
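In schematic form (our paraphrase, with our notation, not Eldan's exact statement): the decomposition reads $\mu = \int \mu_\theta\, m(d\theta)$, where the entropy $H(m)$ of the mixing measure plays the role of the logarithm of the number of components, and each component $\mu_\theta$ is close to a product measure in the sense that its covariance matrix $\mathrm{Cov}(\mu_\theta)$ is close to that of an exact product measure, i.e., nearly diagonal.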
A key practical constraint on the design of Hybrid automatic repeat request (HARQ) schemes is the size of the on-chip buffer that is available at the receiver to store previously received packets. In fact, in modern wireless standards such as LTE and LTE-A, the HARQ buffer size is one of the main drivers of the modem area and power consumption. This has recently highlighted the importance of HARQ buffer management, that is, of the use of buffer-aware transmission schemes and of advanced compression policies for the storage of received data. This work investigates HARQ buffer management by leveraging information-theoretic achievability arguments based on random coding. Specifically, standard HARQ schemes, namely Type-I, Chase Combining and Incremental Redundancy, are first studied under the assumption of a finite-capacity HARQ buffer by considering both coded modulation, via Gaussian signaling, and Bit Interleaved Coded Modulation (BICM). The analysis sheds light on the impact of different compression strategies, namely the conventional compression of log-likelihood ratios and the direct digitization of baseband signals, on the throughput. Then, coding strategies based on layered modulation and optimized coding blocklength are investigated, highlighting the benefits of HARQ buffer-aware transmission schemes. The optimization of baseband compression for multiple-antenna links is also studied, demonstrating the optimality of a transform coding approach.
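As a toy illustration of the random-coding throughput comparison described above (a minimal sketch under textbook assumptions: ideal Gaussian codebooks, Rayleigh block fading, and no buffer constraint; this is not the paper's model, and all names and parameters below are ours):

import numpy as np

rng = np.random.default_rng(0)

def harq_throughput(scheme, R=2.0, avg_snr_db=5.0, max_rounds=4, trials=200_000):
    # Monte Carlo throughput (bits per channel use) of idealized HARQ over
    # Rayleigh block fading with Gaussian signaling. Chase Combining (CC)
    # accumulates SNR across retransmissions; Incremental Redundancy (IR)
    # accumulates mutual information. Textbook model, not the paper's setup.
    avg_snr = 10 ** (avg_snr_db / 10)
    gains = avg_snr * rng.exponential(size=(trials, max_rounds))  # per-round SNRs
    if scheme == "CC":
        metric = np.log2(1 + np.cumsum(gains, axis=1))   # capacity of combined SNR
    else:  # "IR"
        metric = np.cumsum(np.log2(1 + gains), axis=1)   # accumulated mutual info
    decoded = metric >= R            # decodable by round t? (metric is nondecreasing)
    success = decoded[:, -1]
    # First round in which decoding succeeds (1-indexed); max_rounds if never.
    rounds = np.where(success, decoded.argmax(axis=1) + 1, max_rounds)
    # Renewal-reward: delivered bits per cycle over channel uses per cycle.
    return R * success.mean() / rounds.mean()

for s in ("CC", "IR"):
    print(s, round(harq_throughput(s), 3))

In this idealized model IR weakly dominates CC, since $\sum_t \log_2(1+\gamma_t) \geq \log_2(1+\sum_t \gamma_t)$; the question studied in the paper is how finite-buffer compression of the stored soft information erodes such gains.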
A communication setup is considered where a transmitter wishes to convey a message to a receiver and simultaneously estimate the state of that receiver through a common waveform. The state is estimated at the transmitter by means of generalized feedback, i.e., a strictly causal channel output, and the known waveform. The scenario at hand is motivated by joint radar and communication, which aims to co-design radar sensing and communication over shared spectrum and hardware. For the case of memoryless single receiver channels with i.i.d. time-varying state sequences, we fully characterize the capacity-distortion tradeoff, defined as the largest achievable rate below which a message can be conveyed reliably while satisfying some distortion constraints on state sensing. We propose a numerical method to compute the optimal input that achieves the capacity-distortion tradeoff. Then, we address memoryless state-dependent broadcast channels (BCs). For physically degraded BCs with i.i.d. time-varying state sequences, we characterize the capacity-distortion tradeoff region as a rather straightforward extension of single receiver channels. For general BCs, we provide inner and outer bounds on the capacity-distortion region, as well as a sufficient condition under which this capacity-distortion region is equal to the product of the capacity region and the set of achievable distortions. A number of illustrative examples demonstrate that the optimal co-design schemes outperform conventional schemes that split the resources between sensing and communication.
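As a definition sketch in our notation (matching the verbal definition above): with message rate $R$, state sequence $S^n$, transmitter-side estimate $\hat S^n$, and per-symbol distortion measure $d$,

$$ C(D) \;=\; \sup\Big\{ R \;:\; R \text{ is achievable and } \limsup_{n\to\infty} \tfrac{1}{n}\, \mathbb{E}\Big[\sum_{i=1}^{n} d(S_i,\hat S_i)\Big] \le D \Big\}. $$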
A basic information-theoretic model for summarization is formulated. Here summarization is considered as the process of taking a report of $v$ binary objects and producing from it a $j$-element subset that captures most of the important features of the original report, with importance being defined via an arbitrary set function endemic to the model. The loss of information is then measured by a weighted average of variational distances, which we term the semantic loss. Our results cover both the case where the probability distribution generating the $v$-length reports is known and the case where it is unknown. In the former, our results demonstrate how to construct summarizers which minimize the semantic loss. For the case where the probability distribution is unknown, we show how to construct summarizers whose semantic loss, when averaged uniformly over all possible distributions, converges to the minimum.
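One plausible reading of the semantic loss (our schematic; the symbols are ours, not the paper's): with $w$ the importance set function, $P_A$ the law of the full report restricted to feature set $A$, and $Q_A$ the corresponding law induced by the summary,

$$ L \;=\; \sum_{A} w(A)\, \big\| P_A - Q_A \big\|_{\mathrm{TV}}, $$

a weighted average of variational distances.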