
Information-theoretic characterization of the complete genotype-phenotype map of a complex pre-biotic world

 Added by Christoph Adami
 Publication date: 2021
 Research language: English





Quantifying how information is encoded in bio-molecular sequences is difficult, since such an analysis usually requires sampling an exponentially large genetic space. Here we show how information theory reveals both robust and compressed encodings in the largest complete genotype-phenotype map (over 5 trillion sequences) obtained to date.
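The paper's own measures are not reproduced here, but a minimal sketch of one standard quantity used in analyses of this kind, the information content of a phenotype taken as the negative log of the fraction of genotypes that realize it, is shown below. It assumes a complete genotype-to-phenotype map held in a dictionary; the function and variable names (`phenotype_information`, `gp_map`, the toy data) are illustrative and not from the paper.

```python
import itertools
import math
from collections import defaultdict

def phenotype_information(gp_map, alphabet_size, length):
    """For a complete genotype -> phenotype map (dict), return -log2 of the
    fraction of all possible sequences that realize each phenotype.
    Rarer phenotypes carry more information in this sense."""
    total = alphabet_size ** length          # size of the full genetic space
    counts = defaultdict(int)
    for phenotype in gp_map.values():
        counts[phenotype] += 1
    return {p: -math.log2(n / total) for p, n in counts.items()}

# Toy example: binary alphabet, length-3 genotypes, two phenotypes.
toy_map = {"".join(g): ("fold" if g.count("A") >= 2 else "junk")
           for g in itertools.product("AB", repeat=3)}
print(phenotype_information(toy_map, alphabet_size=2, length=3))
```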



Related research

Biological and artificial neural systems are composed of many local processors, and their capabilities depend upon the transfer function that relates each local processor's outputs to its inputs. This paper uses a recent advance in the foundations of information theory to study the properties of local processors that use contextual input to amplify or attenuate transmission of information about their driving inputs. This advance enables the information transmitted by processors with two distinct inputs to be decomposed into those components unique to each input, that shared between the two inputs, and that which depends on both though it is in neither, i.e., synergy. The decompositions that we report here show that contextual modulation has information-processing properties that contrast with those of all four simple arithmetic operators, that it can take various forms, and that the form used in our previous studies of artificial neural nets composed of local processors with both driving and contextual inputs is particularly well-suited to provide the distinctive capabilities of contextual modulation under a wide range of conditions. We argue that the decompositions reported here could be compared with those obtained from empirical neurobiological and psychophysical data under conditions thought to reflect contextual modulation. That would then shed new light on the underlying processes involved. Finally, we suggest that such decompositions could aid the design of context-sensitive machine learning algorithms.
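The full unique/shared/synergy decomposition requires a specific redundancy measure from the recent advance cited by the authors, which is not reproduced here. The sketch below only estimates the standard mutual-information terms and the co-information from empirical samples of a toy local processor with a driving input, a contextual input, and an output; all names and the toy gating rule are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def entropy_bits(counter):
    """Shannon entropy (bits) of an empirical distribution given as a Counter."""
    p = np.array(list(counter.values()), dtype=float)
    p /= p.sum()
    return float(-np.sum(p * np.log2(p)))

def mutual_information(samples, idx_a, idx_b):
    """I(A;B) = H(A) + H(B) - H(A,B), estimated from joint samples (tuples)."""
    h_a = entropy_bits(Counter(tuple(s[i] for i in idx_a) for s in samples))
    h_b = entropy_bits(Counter(tuple(s[i] for i in idx_b) for s in samples))
    h_ab = entropy_bits(Counter(tuple(s[i] for i in idx_a + idx_b) for s in samples))
    return h_a + h_b - h_ab

# Toy processor: the output y passes the driving input d only when context c = 1.
rng = np.random.default_rng(0)
samples = []
for _ in range(10000):
    d, c = rng.integers(0, 2, size=2)
    y = d if c == 1 else 0
    samples.append((int(d), int(c), int(y)))

i_dy = mutual_information(samples, [0], [2])      # driving input vs. output
i_cy = mutual_information(samples, [1], [2])      # contextual input vs. output
i_dcy = mutual_information(samples, [0, 1], [2])  # both inputs jointly vs. output
co_info = i_dy + i_cy - i_dcy  # negative values indicate net synergy over redundancy
print(i_dy, i_cy, i_dcy, co_info)
```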
In this paper, by the Hasse-Weil bound, we determine the necessary and sufficient condition on coefficients $a_1,a_2,a_3\in\mathbb{F}_{2^n}$ with $n=2m$ such that $f(x) = x^{3\cdot 2^m} + a_1x^{2^{m+1}+1} + a_2 x^{2^m+2} + a_3x^3$ is an APN function over $\mathbb{F}_{2^n}$. Our result resolves the first half of an open problem by Carlet in International Workshop on the Arithmetic of Finite Fields, 83-107, 2014.
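The APN property itself can be checked exhaustively for small $n$: $f$ is APN when, for every nonzero $a$, the derivative $f(x+a)+f(x)$ takes each value at most twice. The sketch below tests this for the quadrinomial above over $\mathbb{F}_{2^6}$; the field size, modulus polynomial, and coefficient values are arbitrary illustrative choices, not the condition derived in the paper.

```python
def gf_mul(a, b, n, mod_poly):
    """Multiply a and b in GF(2^n); mod_poly is the irreducible modulus
    as an integer, including the x^n bit."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & (1 << n):
            a ^= mod_poly
    return result

def gf_pow(x, e, n, mod_poly):
    """Compute x^e in GF(2^n) by square-and-multiply."""
    result = 1
    while e:
        if e & 1:
            result = gf_mul(result, x, n, mod_poly)
        x = gf_mul(x, x, n, mod_poly)
        e >>= 1
    return result

def is_apn(f, n):
    """f is APN iff f(x + a) + f(x) takes every value at most twice for each
    nonzero a (addition in characteristic 2 is XOR)."""
    q = 1 << n
    for a in range(1, q):
        counts = {}
        for x in range(q):
            d = f(x ^ a) ^ f(x)
            counts[d] = counts.get(d, 0) + 1
            if counts[d] > 2:
                return False
    return True

# Illustrative parameters: n = 6 (so m = 3), modulus x^6 + x + 1,
# and placeholder coefficients a1, a2, a3 (not values from the paper).
n, m = 6, 3
mod_poly = (1 << 6) | (1 << 1) | 1
a1, a2, a3 = 1, 2, 3

def f(x):
    return (gf_pow(x, 3 * 2**m, n, mod_poly)
            ^ gf_mul(a1, gf_pow(x, 2**(m + 1) + 1, n, mod_poly), n, mod_poly)
            ^ gf_mul(a2, gf_pow(x, 2**m + 2, n, mod_poly), n, mod_poly)
            ^ gf_mul(a3, gf_pow(x, 3, n, mod_poly), n, mod_poly))

print(is_apn(f, n))
```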
A renowned information-theoretic formula by Shannon expresses the mutual information rate of a white Gaussian channel with a stationary Gaussian input as an integral of a simple function of the power spectral density of the channel input. We give in this paper a rigorous yet elementary proof of this classical formula. As opposed to the conventional approaches, which either rely on heavy mathematical machinery or have to resort to external results, our proof, which hinges on a recently proven sampling theorem, is elementary and self-contained, using only well-known facts from basic calculus and matrix theory.
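For reference, one common statement of the formula in question (normalization conventions vary between texts, so this should be read as a sketch): for the channel $Y(t) = X(t) + N(t)$ with white Gaussian noise of two-sided power spectral density $N_0/2$ and a stationary Gaussian input with power spectral density $S_X(f)$, the mutual information rate is
$$ I \;=\; \frac{1}{2}\int_{-\infty}^{\infty} \log\!\left(1 + \frac{S_X(f)}{N_0/2}\right) df. $$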
Given a probability measure $\mu$ over $\mathbb{R}^n$, it is often useful to approximate it by a convex combination of a small number of probability measures, such that each component is close to a product measure. Recently, Ronen Eldan used a stochastic localization argument to prove a general decomposition result of this type. In Eldan's theorem, the `number of components' is characterized by the entropy of the mixture, and `closeness to product' is characterized by the covariance matrix of each component. We present an elementary proof of Eldan's theorem which makes use of an information-theoretic (or estimation-theoretic) interpretation. The proof is analogous to that of an earlier decomposition result known as the `pinning lemma'.
Rami Atar, Neri Merhav (2014)
A well-known technique for estimating probabilities of rare events in general, and in information theory in particular (used, e.g., in the sphere-packing bound), is that of finding a reference probability measure under which the event of interest has probability of order one and estimating the probability in question by means of the Kullback-Leibler divergence. A method recently proposed in [2] can be viewed as an extension of this idea, in which the probability under the reference measure may itself be decaying exponentially, and the Rényi divergence is used instead. The purpose of this paper is to demonstrate the usefulness of this approach in various information-theoretic settings. For the problem of channel coding, we provide a general methodology for obtaining matched, mismatched and robust error exponent bounds, as well as new results in a variety of particular channel models. Other applications we address include rate-distortion coding and the problem of guessing.
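The classical change-of-measure step alluded to here can be made precise with the data-processing inequality for divergence: applying it to the indicator of the event $A$ gives, for any reference measure $Q$,
$$ D(Q\|P) \;\ge\; Q(A)\log\frac{Q(A)}{P(A)} + (1-Q(A))\log\frac{1-Q(A)}{1-P(A)}, $$
so when $Q(A)$ is of order one this lower-bounds $P(A)$ by roughly $e^{-D(Q\|P)}$ up to subexponential factors. The Rényi-divergence extension of [2], which handles reference measures under which the event itself decays exponentially, is not reproduced here.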
