
Algorithmic information theory

Posted by Paul Vitanyi
Publication date: 2008
Research field: Informatics engineering
Language: English
Author: Peter D. Grunwald





We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining 'information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are fundamentally different. We indicate how recent developments within the theory allow one to formally distinguish between 'structural' (meaningful) and 'random' information as measured by the Kolmogorov structure function, which leads to a mathematical formalization of Occam's razor in inductive inference. We end by discussing some of the philosophical implications of the theory.
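Although Kolmogorov complexity itself is uncomputable, the length of a compressed encoding gives a computable upper bound on it, which is enough to illustrate the distinction the abstract draws between structural and random information. The sketch below is purely illustrative and is not taken from the paper; it assumes Python's standard zlib compressor as a stand-in for an ideal compressor.

```python
import os
import zlib

def compressed_length(data: bytes) -> int:
    """Length of a zlib encoding: a crude, computable upper bound on K(data)."""
    return len(zlib.compress(data, level=9))

structured = b"01" * 5000         # highly regular: describable by a short program
random_bytes = os.urandom(10000)  # incompressible with overwhelming probability

print(compressed_length(structured))    # small: far below 10000
print(compressed_length(random_bytes))  # close to (or above) 10000
```

Both strings are 10,000 bytes long, yet the regular one is fully described by a short program while a uniformly random one has near-maximal Kolmogorov complexity, a per-object distinction that Shannon entropy of a source alone does not capture.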




Read also

We propose a novel theoretical model, the blocked and hierarchical variational autoencoder (BHiVAE), to obtain better-disentangled representations. It is well known that information theory has strong explanatory power for neural networks, so we approach the disentanglement problem from an information-theoretic perspective. BHiVAE derives mainly from the information bottleneck theory and the information maximization principle. Our main ideas are that (1) a block of neurons, rather than a single neuron node, is used to represent each attribute, so that the block can carry enough information; and (2) a hierarchical structure places different attributes on different layers, so that the information within each layer can be segmented to ensure that the final representation is disentangled. Furthermore, we present supervised and unsupervised variants of BHiVAE, which differ mainly in how information is separated between blocks. In supervised BHiVAE, we use the label information as the criterion for separating blocks. In unsupervised BHiVAE, without extra information, we use the Total Correlation (TC) measure to achieve independence, and we design a new prior distribution over the latent space to guide representation learning. BHiVAE exhibits excellent disentanglement results in experiments and superior classification accuracy in representation learning.
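As a rough illustration of the block idea only, here is a minimal PyTorch-style encoder whose latent code is partitioned into per-attribute blocks; the class name, layer sizes and block sizes are assumptions made for this sketch and are not the authors' implementation.

```python
import torch
import torch.nn as nn

class BlockedEncoder(nn.Module):
    """Encoder whose latent code is split into one block per attribute (illustrative)."""
    def __init__(self, in_dim: int = 784, block_dims=(8, 8, 8)):
        super().__init__()
        self.block_dims = list(block_dims)
        latent_dim = sum(block_dims)
        self.body = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)

    def forward(self, x):
        h = self.body(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        blocks = torch.split(z, self.block_dims, dim=-1)          # one latent block per attribute
        return blocks, mu, logvar

blocks, _, _ = BlockedEncoder()(torch.randn(4, 784))
print([tuple(b.shape) for b in blocks])  # [(4, 8), (4, 8), (4, 8)]
```

In the supervised variant each block would be tied to one labelled attribute, while in the unsupervised variant independence between blocks would instead be encouraged through the Total Correlation term.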
The main contribution of this paper is to design an Information Retrieval (IR) technique based on Algorithmic Information Theory (using the Normalized Compression Distance, NCD), statistical techniques (outliers), and a novel organization of the database structure. The paper shows how these can be integrated to retrieve information from generic databases using long (text-based) queries. Two important problems are analyzed. On the one hand, how to detect false positives, i.e., cases where the distance between documents is very low yet there is no actual similarity. On the other hand, we propose a way to structure a document database whose similarity-distance estimation depends on the length of the selected text. Finally, we present the experimental evaluations carried out to study these problems.
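The Normalized Compression Distance at the core of this technique is defined by NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the compressed length under some real-world compressor. A minimal sketch follows; the choice of zlib and the toy documents are assumptions made here for illustration, not details from the paper.

```python
import zlib

def c(data: bytes) -> int:
    """Compressed length under zlib, playing the role of C in the NCD formula."""
    return len(zlib.compress(data, level=9))

def ncd(x: bytes, y: bytes) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy = c(x), c(y)
    return (c(x + y) - min(cx, cy)) / max(cx, cy)

doc_a = b"algorithmic information theory and kolmogorov complexity " * 20
doc_b = b"kolmogorov complexity and algorithmic information theory " * 20
doc_c = b"a recipe for sourdough bread with a long cold fermentation " * 20
print(ncd(doc_a, doc_b))  # close to 0: near-duplicate documents
print(ncd(doc_a, doc_c))  # closer to 1: unrelated documents
```

Values near 0 mean one document compresses well given the other, i.e. strong similarity, which is what makes long text-based queries against a document database workable.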
Constraints on entropies are considered to be the laws of information theory. Even though the pursuit of their discovery has been a central theme of research in information theory, the algorithmic aspects of constraints on entropies remain largely unexplored. Here, we initiate an investigation of decision problems about constraints on entropies by placing several such problems into levels of the arithmetical hierarchy. We establish the following results on checking validity over all almost-entropic functions: first, validity of a Boolean information constraint arising from a monotone Boolean formula is co-recursively enumerable; second, validity of tight conditional information constraints is in $\Pi^0_3$. Furthermore, under some restrictions, validity of conditional information constraints with slack is in $\Sigma^0_2$, and validity of information inequality constraints involving max is Turing equivalent to validity of information inequality constraints (with no max involved). We also prove that the classical implication problem for conditional independence statements is co-recursively enumerable.
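To make concrete what an information constraint is, the sketch below numerically checks one of the basic Shannon-type laws, I(X;Y|Z) >= 0, i.e. H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z) >= 0, on randomly sampled joint distributions of three binary variables. This worked example is my own illustration; the paper itself studies the decidability of such constraints, not their numerical verification.

```python
import numpy as np

def entropy(p: np.ndarray) -> float:
    """Shannon entropy, in bits, of a probability vector."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
for _ in range(1000):
    pxyz = rng.dirichlet(np.ones(8)).reshape(2, 2, 2)  # random joint distribution of (X, Y, Z)
    h_xyz = entropy(pxyz.ravel())
    h_xz = entropy(pxyz.sum(axis=1).ravel())           # marginalize out Y
    h_yz = entropy(pxyz.sum(axis=0).ravel())           # marginalize out X
    h_z = entropy(pxyz.sum(axis=(0, 1)))               # marginalize out X and Y
    assert h_xz + h_yz - h_z - h_xyz >= -1e-9          # I(X;Y|Z) >= 0
print("I(X;Y|Z) >= 0 held on all sampled distributions")
```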
Ricky X. F. Chen, 2016
This article serves as a brief introduction to Shannon information theory. The concepts of information, Shannon entropy and channel capacity are mainly covered, and all of them are developed in a totally combinatorial flavor. Some issues usually not addressed in the literature are discussed here as well. In particular, we show that channel capacity can seemingly be defined differently, in a way that would allow us to potentially transmit more messages in a fixed, sufficiently long time duration. However, for a channel carrying a finite number of letters, the channel capacity unfortunately remains the same as the Shannon limit.
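For concreteness, here is a small worked example of the two central quantities: the binary entropy function and the capacity C = 1 - H2(p) of a binary symmetric channel with crossover probability p. The numbers are my own illustration, not taken from the article.

```python
import math

def h2(p: float) -> float:
    """Binary entropy function, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(h2(0.5))      # 1.0 bit: a fair coin is maximally uncertain
print(h2(0.1))      # ~0.469 bits: a heavily biased coin carries less information
print(1 - h2(0.1))  # ~0.531 bits per channel use: capacity of a BSC with p = 0.1
```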
Jian Ma, Zengqi Sun, 2008
We prove that mutual information is actually negative copula entropy, based on which a method for mutual information estimation is proposed.
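A rough numerical check of the stated identity is easy to run for a bivariate Gaussian, whose mutual information has the closed form -0.5*ln(1 - rho^2): transform each margin to its empirical ranks (a sample from the copula), estimate the copula's differential entropy with a histogram, and negate it. The estimator below is a crude choice made here for illustration and is not the estimation method proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.8, 200_000
xy = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

# Probability-integral transform via ranks: an empirical sample from the copula on [0, 1]^2.
u = (np.argsort(np.argsort(xy, axis=0), axis=0) + 0.5) / n

bins = 50
hist, _, _ = np.histogram2d(u[:, 0], u[:, 1], bins=bins, range=[[0, 1], [0, 1]])
p = hist.ravel() / n
p = p[p > 0]
cell_area = (1.0 / bins) ** 2
copula_entropy = float(-(p * np.log(p / cell_area)).sum())  # differential entropy, in nats

print(-copula_entropy)            # estimated mutual information (negative copula entropy)
print(-0.5 * np.log(1 - rho**2))  # analytic mutual information, about 0.511 nats
```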
