
A software for learning Information Theory basics with emphasis on Entropy of Spanish

Published by: Fabio G. Guerrero Moreno
Publication date: 2007
Research field: Informatics Engineering
Paper language: English





This paper reports a software tutorial for learning the basics of Information Theory in a practical way. The software, called IT-tutor-UV, uses a modern Spanish corpus to model the source. Both source coding and channel coding are included in this educational tool as part of the learning experience. Entropy values of the Spanish language obtained with IT-tutor-UV are discussed and compared to earlier values calculated under more limited conditions.
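The abstract does not describe IT-tutor-UV at code level, but the core computation it teaches, estimating the entropy of a language from corpus statistics, can be sketched in a few lines. The following Python sketch is only an illustration of that kind of computation (the file name corpus_es.txt and the letters-only preprocessing are assumptions, not part of the tool): it estimates the first-order entropy of a Spanish text from single-letter frequencies.

import math
from collections import Counter

def unigram_entropy(text: str) -> float:
    """First-order entropy estimate, in bits per symbol, from letter frequencies."""
    # Keep alphabetic characters only; lowercasing merges case variants,
    # and Spanish letters such as 'ñ' pass the isalpha() test.
    symbols = [c.lower() for c in text if c.isalpha()]
    counts = Counter(symbols)
    total = len(symbols)
    # H = -sum_i p_i * log2(p_i) over the observed symbol probabilities.
    return -sum(n / total * math.log2(n / total) for n in counts.values())

# Hypothetical corpus file; any plain-text Spanish sample would do.
with open("corpus_es.txt", encoding="utf-8") as f:
    print(f"H1 ≈ {unigram_entropy(f.read()):.3f} bits/symbol")

Higher-order estimates condition on preceding letters (digram and trigram models, and beyond), which is how corpus-based studies approach the true entropy of a language.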




Read also

We introduce an axiomatic approach to entropies and relative entropies that relies only on minimal information-theoretic axioms, namely monotonicity under mixing and data-processing as well as additivity for product distributions. We find that these axioms induce sufficient structure to establish continuity in the interior of the probability simplex and meaningful upper and lower bounds, e.g., we find that every relative entropy must lie between the Rényi divergences of order $0$ and $\infty$. We further show simple conditions for positive definiteness of such relative entropies and a characterisation in terms of a variant of relative trumping. Our main result is a one-to-one correspondence between entropies and relative entropies.
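For reference, the two extreme Rényi divergences that the bound refers to are, in their standard form (not restated in the abstract), for distributions $P$ and $Q$ on a finite alphabet:

$$ D_0(P\|Q) = -\log Q(\{x : P(x) > 0\}), \qquad D_\infty(P\|Q) = \log \max_{x:\,P(x)>0} \frac{P(x)}{Q(x)}, $$

so the stated result says every admissible relative entropy $D$ satisfies $D_0 \le D \le D_\infty$.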
We offer a new approach to the information decomposition problem in information theory: given a target random variable co-distributed with multiple source variables, how can we decompose the mutual information into a sum of non-negative terms that quantify the contributions of each random variable, not only individually but also in combination? We derive our decomposition from cooperative game theory. It can be seen as assigning a fair share of the mutual information to each combination of the source variables. Our decomposition is based on a different lattice from the usual partial information decomposition (PID) approach, and as a consequence our decomposition has a smaller number of terms: it has analogs of the synergy and unique information terms, but lacks terms corresponding to redundancy. Because of this, it is able to obey equivalents of the axioms known as local positivity and identity, which cannot be simultaneously satisfied by a PID measure.
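The "fair share" language is the hallmark of the Shapley value from cooperative game theory; as background only (the paper's exact lattice-based construction may differ, and it assigns shares to combinations of sources rather than to single players), the Shapley value of player $i$ in a game $v$ on a player set $N$ is

$$ \phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!} \bigl( v(S \cup \{i\}) - v(S) \bigr), $$

which distributes the total value $v(N)$, here the mutual information with the target, according to average marginal contributions.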
In an effort to develop the foundations for a non-stochastic theory of information, the notion of $\delta$-mutual information between uncertain variables is introduced as a generalization of Nair's non-stochastic information functional. Several properties of this new quantity are illustrated, and used to prove a channel coding theorem in a non-stochastic setting. Namely, it is shown that the largest $\delta$-mutual information between received and transmitted codewords over $\epsilon$-noise channels equals the $(\epsilon, \delta)$-capacity. This notion of capacity generalizes the Kolmogorov $\epsilon$-capacity to packing sets of overlap at most $\delta$, and is a variation of a previous definition proposed by one of the authors. Results are then extended to more general noise models, and to non-stochastic, memoryless, stationary channels. Finally, sufficient conditions are established for the factorization of the $\delta$-mutual information and to obtain a single-letter capacity expression. Compared to previous non-stochastic approaches, the presented theory admits the possibility of decoding errors, as in Shannon's probabilistic setting, while retaining a worst-case, non-stochastic character.
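For context, the Kolmogorov $\epsilon$-capacity that this work generalizes is, in its standard form (an assumption here, since the abstract does not restate it),

$$ C_\epsilon(A) = \log_2 M_\epsilon(A), $$

where $M_\epsilon(A)$ is the maximum number of points of the set $A$ whose pairwise distances exceed $\epsilon$, i.e., the size of a largest $\epsilon$-packing; the $(\epsilon, \delta)$-capacity relaxes this by allowing the packing sets to overlap by at most $\delta$.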
The subject of this paper is the long-standing open problem of developing a general capacity theory for wireless networks, particularly a theory capable of describing the fundamental performance limits of mobile ad hoc networks (MANETs). A MANET is a peer-to-peer network with no pre-existing infrastructure. MANETs are the most general wireless networks, with single-hop, relay, interference, mesh, and star networks comprising special cases. The lack of a MANET capacity theory has stunted the development and commercialization of many types of wireless networks, including emergency, military, sensor, and community mesh networks. Information theory, which has been vital for links and centralized networks, has not been successfully applied to decentralized wireless networks. Even if this were accomplished, for such a theory to truly characterize the limits of deployed MANETs it must overcome three key roadblocks. First, most current capacity results rely on the allowance of unbounded delay and reliability. Second, spatial and timescale decompositions have not yet been developed for optimally modeling the spatial and temporal dynamics of wireless networks. Third, a useful network capacity theory must integrate rather than ignore the important role of overhead messaging and feedback. This paper describes some of the shifts in thinking that may be needed to overcome these roadblocks and develop a more general theory that we refer to as non-equilibrium information theory.
Using a sharp version of the reverse Young inequality, and a Rényi entropy comparison result due to Fradelizi, Madiman, and Wang, the authors derive Rényi entropy power inequalities for log-concave random vectors when the Rényi parameters belong to $(0,1)$. Furthermore, the estimates are shown to be sharp up to absolute constants.
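As background (standard definitions; the paper's precise constants are not restated here): for a random vector $X$ on $\mathbb{R}^n$ with density $f$, the Rényi entropy of order $\alpha \in (0,1)$ and its entropy power are

$$ h_\alpha(X) = \frac{1}{1-\alpha} \log \int_{\mathbb{R}^n} f(x)^\alpha \, dx, \qquad N_\alpha(X) = e^{2 h_\alpha(X)/n}, $$

and a Rényi entropy power inequality asserts $N_\alpha(X+Y) \ge c \, \bigl( N_\alpha(X) + N_\alpha(Y) \bigr)$ for independent $X$ and $Y$, which the result above establishes for log-concave vectors with an absolute constant $c$.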