Non-Stochastic Information Theory

Added by Anshuka Rangi
Publication date: 2019
Language: English





In an effort to develop the foundations for a non-stochastic theory of information, the notion of $\delta$-mutual information between uncertain variables is introduced as a generalization of Nair's non-stochastic information functional. Several properties of this new quantity are illustrated and used to prove a channel coding theorem in a non-stochastic setting. Namely, it is shown that the largest $\delta$-mutual information between received and transmitted codewords over $\epsilon$-noise channels equals the $(\epsilon, \delta)$-capacity. This notion of capacity generalizes the Kolmogorov $\epsilon$-capacity to packing sets of overlap at most $\delta$, and is a variation of a previous definition proposed by one of the authors. Results are then extended to more general noise models and to non-stochastic, memoryless, stationary channels. Finally, sufficient conditions are established for the factorization of the $\delta$-mutual information and to obtain a single-letter capacity expression. Compared to previous non-stochastic approaches, the presented theory admits the possibility of decoding errors, as in Shannon's probabilistic setting, while retaining a worst-case, non-stochastic character.
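
As a rough sketch of the objects involved (our rendering, not the authors' exact formalism): the classical Kolmogorov $\epsilon$-capacity of an input set $\mathcal{X}$ counts the largest codebook whose $\epsilon$-noise images are pairwise disjoint,

$$C_\epsilon(\mathcal{X}) = \log \max\{\, |\mathcal{C}| : \mathcal{C} \subseteq \mathcal{X},\; B_\epsilon(x) \cap B_\epsilon(x') = \emptyset \text{ for all distinct } x, x' \in \mathcal{C} \,\},$$

where $B_\epsilon(x)$ denotes the set of outputs reachable from $x$ over the $\epsilon$-noise channel. The $(\epsilon, \delta)$-capacity discussed above relaxes the disjointness requirement, allowing the noise images of distinct codewords to overlap by at most $\delta$; this relaxation is what admits decoding errors while preserving the worst-case character.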



Related research

In 1959, Rényi proposed the information dimension and the $d$-dimensional entropy to measure the information content of general random variables. This paper proposes a generalization of information dimension to stochastic processes by defining the information dimension rate as the entropy rate of the uniformly quantized stochastic process divided by minus the logarithm of the quantizer step size $1/m$ in the limit as $m \to \infty$. It is demonstrated that the information dimension rate coincides with the rate-distortion dimension, defined as twice the rate-distortion function $R(D)$ of the stochastic process divided by $-\log(D)$ in the limit as $D \downarrow 0$. It is further shown that, among all multivariate stationary processes with a given (matrix-valued) spectral distribution function (SDF), the Gaussian process has the largest information dimension rate, and that the information dimension rate of multivariate stationary Gaussian processes is given by the average rank of the derivative of the SDF. The presented results reveal that the fundamental limits of almost zero-distortion recovery via compressible signal pursuit and almost lossless analog compression are different in general.
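
Rényi's scalar definition underlying this rate, $d(X) = \lim_{m \to \infty} H(\lfloor mX \rfloor)/\log m$, is easy to probe numerically. Below is a minimal sketch (our illustration, not the paper's code): for a continuous scalar the ratio should drift toward 1, and for a discrete one toward 0; convergence is slow, and the plug-in entropy estimate degrades once the bin count approaches the sample size.

    import numpy as np

    def entropy_bits(samples):
        # Plug-in estimate of the Shannon entropy (in bits) of a discrete sample.
        _, counts = np.unique(samples, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(0)
    x_cont = rng.standard_normal(500_000)                # continuous: dimension 1
    x_disc = rng.integers(0, 4, 500_000).astype(float)   # discrete: dimension 0

    for m in (2**4, 2**8, 2**12):
        # Uniform quantizer with step size 1/m, as in the definition above.
        r_cont = entropy_bits(np.floor(m * x_cont)) / np.log2(m)
        r_disc = entropy_bits(np.floor(m * x_disc)) / np.log2(m)
        print(f"m = {m:5d}   continuous: {r_cont:.3f}   discrete: {r_disc:.3f}")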
Given a probability measure $\mu$ over $\mathbb{R}^n$, it is often useful to approximate it by a convex combination of a small number of probability measures such that each component is close to a product measure. Recently, Ronen Eldan used a stochastic localization argument to prove a general decomposition result of this type. In Eldan's theorem, the "number of components" is characterized by the entropy of the mixture, and "closeness to product" is characterized by the covariance matrix of each component. We present an elementary proof of Eldan's theorem which makes use of an information-theoretic (or estimation-theoretic) interpretation. The proof is analogous to that of an earlier decomposition result known as the "pinning lemma".
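
Schematically (our paraphrase, not Eldan's precise statement), the decomposition has the form $\mu = \sum_i w_i \mu_i$ with $w_i \ge 0$ and $\sum_i w_i = 1$, where the effective number of components is controlled through the entropy $H(w) = -\sum_i w_i \log w_i$ of the mixture weights, and each component $\mu_i$ is close to a product measure in a sense quantified by its covariance matrix.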
We offer a new approach to the information decomposition problem in information theory: given a target random variable co-distributed with multiple source variables, how can we decompose the mutual information into a sum of non-negative terms that quantify the contributions of each random variable, not only individually but also in combination? We derive our decomposition from cooperative game theory. It can be seen as assigning a fair share of the mutual information to each combination of the source variables. Our decomposition is based on a different lattice from the usual partial information decomposition (PID) approach, and as a consequence it has a smaller number of terms: it has analogs of the synergy and unique-information terms, but lacks terms corresponding to redundancy. Because of this, it is able to obey equivalents of the axioms known as local positivity and identity, which cannot be simultaneously satisfied by a PID measure.
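
The "fair share" idea can be made concrete with the Shapley value from cooperative game theory, taking the mutual information $I(T; X_S)$ carried by each coalition $S$ of sources as the value function. The sketch below is our toy illustration on an XOR target, not the authors' construction (their decomposition is built on a different lattice): each source alone carries zero information, so the full bit is attributed evenly as synergy.

    import numpy as np
    from itertools import combinations
    from math import factorial

    # Toy joint distribution p[x1, x2, t] with T = X1 XOR X2 and fair-coin sources.
    p = np.zeros((2, 2, 2))
    for x1 in (0, 1):
        for x2 in (0, 1):
            p[x1, x2, x1 ^ x2] = 0.25

    def coalition_mi(S):
        # I(T; X_S): marginalize out the sources not in S, then compute MI with T.
        drop = tuple(a for a in (0, 1) if a not in S)
        j = p.sum(axis=drop).reshape(-1, 2)       # rows: X_S configs, cols: T
        ps = j.sum(axis=1, keepdims=True)
        pt = j.sum(axis=0, keepdims=True)
        nz = j > 0
        return float((j[nz] * np.log2(j[nz] / (ps @ pt)[nz])).sum())

    def shapley(i, players=(0, 1)):
        # Average marginal contribution of source i over all coalition orders.
        others = [a for a in players if a != i]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                w = factorial(len(S)) * factorial(len(others) - len(S)) / factorial(len(players))
                total += w * (coalition_mi(set(S) | {i}) - coalition_mi(set(S)))
        return total

    print([round(shapley(i), 3) for i in (0, 1)])  # [0.5, 0.5]: pure synergy, split fairly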
The subject of this paper is the long-standing open problem of developing a general capacity theory for wireless networks, particularly a theory capable of describing the fundamental performance limits of mobile ad hoc networks (MANETs). A MANET is a peer-to-peer network with no pre-existing infrastructure. MANETs are the most general wireless networks, with single-hop, relay, interference, mesh, and star networks comprising special cases. The lack of a MANET capacity theory has stunted the development and commercialization of many types of wireless networks, including emergency, military, sensor, and community mesh networks. Information theory, which has been vital for links and centralized networks, has not been successfully applied to decentralized wireless networks. Even if this were accomplished, for such a theory to truly characterize the limits of deployed MANETs it must overcome three key roadblocks. First, most current capacity results rely on the allowance of unbounded delay and reliability. Second, spatial and timescale decompositions have not yet been developed for optimally modeling the spatial and temporal dynamics of wireless networks. Third, a useful network capacity theory must integrate rather than ignore the important role of overhead messaging and feedback. This paper describes some of the shifts in thinking that may be needed to overcome these roadblocks and to develop a more general theory that we refer to as non-equilibrium information theory.
Any theory amenable to scientific inquiry must have testable consequences. This minimal criterion is uniquely challenging for the study of consciousness, as we do not know if it is possible to confirm via observation from the outside whether or not a physical system knows what it feels like to have an inside - a challenge referred to as the hard problem of consciousness. To arrive at a theory of consciousness, the hard problem has motivated the development of phenomenological approaches that adopt assumptions about what properties consciousness has based on first-hand experience and, from these, derive the physical processes that give rise to these properties. A leading theory adopting this approach is Integrated Information Theory (IIT), which assumes our subjective experience is a unified whole, subsequently yielding a requirement for physical feedback as a necessary condition for consciousness. Here, we develop a mathematical framework to assess the validity of this assumption by testing it in the context of isomorphic physical systems with and without feedback. The isomorphism allows us to isolate changes in $\Phi$ without affecting the size or functionality of the original system. Indeed, we show that the only mathematical difference between a conscious system with $\Phi>0$ and an isomorphic philosophical zombie with $\Phi=0$ is a permutation of the binary labels used to internally represent functional states. This implies $\Phi$ is sensitive to functionally arbitrary aspects of a particular labeling scheme, with no clear justification in terms of phenomenological differences. In light of this, we argue that any quantitative theory of consciousness, including IIT, should be invariant under isomorphisms if it is to avoid the existence of isomorphic philosophical zombies and the epistemological problems they pose.
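
The relabeling argument is easy to demonstrate on a toy boolean network (a minimal sketch of ours; computing $\Phi$ itself, e.g. with a tool such as PyPhi, is beyond this snippet). Conjugating the update rule by a bijective relabeling of one node's states yields a functionally isomorphic system whose mechanisms are nevertheless described by different gates:

    from itertools import product

    # Toy two-node network with feedback: A' = B, B' = A AND B.
    def step(state):
        a, b = state
        return (b, a & b)

    # Bijective relabeling of the global state space: swap node A's labels 0 <-> 1.
    # It is an involution, so it serves as its own inverse.
    def perm(state):
        return (1 - state[0], state[1])

    # Conjugated dynamics: the same machine, described in the permuted labels.
    def step_perm(state):
        return perm(step(perm(state)))

    for s in product((0, 1), repeat=2):
        print(f"{s} -> {step(s)}    relabeled: {s} -> {step_perm(s)}")

    # step_perm works out to A' = NOT B, B' = (NOT A) AND B: different gates,
    # identical dynamics up to the relabeling.

Any label-invariant quantity must assign these two descriptions the same value; the abstract's point is that $\Phi$, as currently defined, need not.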