
Strong Converses Are Just Edge Removal Properties

Published by: Oliver Kosut
Publication date: 2017
Research field: Information engineering
Paper language: English





This paper explores the relationship between two ideas in network information theory: edge removal and strong converses. Edge removal properties state that if an edge of small capacity is removed from a network, the capacity region does not change too much. Strong converses state that, for rates outside the capacity region, the probability of error converges to 1 as the blocklength goes to infinity. Various notions of edge removal and strong converse are defined, depending on how edge capacity and error probability scale with blocklength, and relations between them are proved. Each class of strong converse implies a specific class of edge removal. The opposite directions are proved for deterministic networks. Furthermore, a technique based on a novel, causal version of the blowing-up lemma is used to prove that for discrete memoryless networks, the weak edge removal property--that the capacity region changes continuously as the capacity of an edge vanishes--is equivalent to the exponentially strong converse--that outside the capacity region, the probability of error goes to 1 exponentially fast. This result is used to prove exponentially strong converses for several examples, including the discrete 2-user interference channel with strong interference, with only a small variation from traditional weak converse proofs.
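As a rough illustration (the notation here is illustrative, not taken verbatim from the paper): the exponentially strong converse asks that for every rate tuple outside the capacity region there exist a constant $E > 0$ such that the error probability of any length-$n$ code at that rate satisfies

$$ 1 - P_e^{(n)} \le 2^{-nE} \quad \text{for all sufficiently large } n, $$

while the weak edge removal property asks that, writing $\mathcal{C}(\delta)$ for the capacity region when the edge in question has capacity $\delta$,

$$ \lim_{\delta \to 0} \mathcal{C}(\delta) = \mathcal{C}(0). $$

The equivalence proved for discrete memoryless networks says that these two statements hold or fail together.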




Read also

The edge-removal problem asks whether the removal of a $\lambda$-capacity edge from a given network can decrease the communication rate between source-terminal pairs by more than $\lambda$. In this short manuscript, we prove that for undirected networks, removing a $\lambda$-capacity edge decreases the rate by $O(\lambda)$. Through previously known reductive arguments, here newly applied to undirected networks, our result implies that the zero-error capacity region of an undirected network equals its vanishing-error capacity region. Whether it is possible to prove similar results for directed networks remains an open question.
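In symbols (illustrative notation, not from the manuscript): if $R_i(G)$ denotes a rate achievable for source-terminal pair $i$ in the undirected network $G$, and $e$ is an edge of capacity $\lambda$, the result states that

$$ R_i(G \setminus e) \ge R_i(G) - c\,\lambda \quad \text{for every pair } i, $$

for some universal constant $c$, i.e., the loss from removing the edge is $O(\lambda)$.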
We study the infimum of the best constant in a functional inequality, the Brascamp-Lieb-like inequality, over auxiliary measures within a neighborhood of a product distribution. In the finite alphabet and the Gaussian cases, such an infimum converges to the best constant in a mutual information inequality. Implications for strong converse properties of two common randomness (CR) generation problems are discussed. In particular, we prove the strong converse property of the rate region for the omniscient helper CR generation problem in the discrete and the Gaussian cases. The latter case is perhaps the first instance of a strong converse for a continuous source when the rate region involves auxiliary random variables.
The edge removal problem studies the loss in network coding rates that results when a network communication edge is removed from a given network. It is known, for example, that in networks restricted to linear coding schemes and networks restricted to Abelian group codes, removing an edge $e^*$ with capacity $R_{e^*}$ reduces the achievable rate on each source by no more than $R_{e^*}$. In this work, we seek to uncover larger families of encoding functions for which the edge removal statement holds. We take a local perspective: instead of requiring that all network encoding functions satisfy certain restrictions (e.g., linearity), we limit only the function carried on the removed edge $e^*$. Our central results give sufficient conditions on the function carried by edge $e^*$ in the code used to achieve a particular rate vector under which we can demonstrate the achievability of a related rate vector once $e^*$ is removed.
Improved lower bounds on the average and the worst-case rate-memory tradeoffs for the Maddah-Ali and Niesen coded caching scenario are presented. For any number of users and files and for arbitrary cache sizes, the multiplicative gap between the exact rate-memory tradeoff and the new lower bound is less than 2.315 in the worst-case scenario and less than 2.507 in the average-case scenario.
The sparsity and compressibility of finite-dimensional signals are of great interest in fields such as compressed sensing. The notion of compressibility is also extended to infinite sequences of i.i.d. or ergodic random variables based on the observed error in their nonlinear k-term approximation. In this work, we use the entropy measure to study the compressibility of continuous-domain innovation processes (alternatively known as white noise). Specifically, we define such a measure as the entropy limit of the doubly quantized (time and amplitude) process. This provides a tool to compare the compressibility of various innovation processes. It also allows us to identify an analogue of the concept of entropy dimension which was originally defined by Rényi for random variables. Particular attention is given to stable and impulsive Poisson innovation processes. Here, our results recognize Poisson innovations as the more compressible ones with an entropy measure far below that of stable innovations. While this result departs from the previous knowledge regarding the compressibility of fat-tailed distributions, our entropy measure ranks stable innovations according to their tail decay.
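For reference, Rényi's entropy (information) dimension of a scalar random variable $X$, to which the abstract alludes, is commonly written as

$$ d(X) = \lim_{m \to \infty} \frac{H\!\left(\lfloor m X \rfloor / m\right)}{\log m}, $$

where $\lfloor mX \rfloor / m$ is $X$ quantized to resolution $1/m$; the work summarized above defines an analogous quantity for continuous-domain innovation processes by quantizing in both time and amplitude.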