Inductive inference is the process of extracting general rules from specific observations. This problem also arises in the analysis of biological networks, such as genetic regulatory networks, where the interactions are complex and the observations are incomplete. A typical task in these problems is to extract general interaction rules, expressed as combinations of Boolean covariates, that explain a measured response variable. The inductive inference process can be viewed as the synthesis of an incompletely specified Boolean function. This incompleteness also generates spurious inferences, which are a serious threat to valid inductive inference rules. Using random Boolean data as a null model, we attempt here to measure the competition between valid and spurious inductive inference rules in a given data set. We formulate two greedy search algorithms, which synthesize a given Boolean response variable in a sparse disjunctive normal form and in a sparse generalized algebraic normal form of the observed variables, respectively, and we evaluate their performance numerically.
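The flavour of greedy disjunctive-normal-form synthesis mentioned above can be sketched as follows: small conjunctions that never fire on a negative observation are added one at a time, each chosen to cover as many uncovered positive observations as possible. The function names and the term-size bound `max_literals` are hypothetical illustration choices, not the algorithm from the abstract.

```python
import itertools

def greedy_dnf(X, y, max_literals=3):
    """Greedily build a sparse DNF covering the positive rows of (X, y).

    X: list of 0/1 tuples (observations); y: list of Booleans (response).
    Returns a list of terms; each term is a list of (index, value) literals.
    Illustrative sketch only, not a specific published algorithm.
    """
    n = len(X[0])
    pos = {i for i, yi in enumerate(y) if yi}
    neg = [X[i] for i, yi in enumerate(y) if not yi]
    terms = []
    while pos:
        best, best_cover = None, set()
        # Enumerate small conjunctions (sparse terms).
        for k in range(1, max_literals + 1):
            for idxs in itertools.combinations(range(n), k):
                for vals in itertools.product([False, True], repeat=k):
                    lits = list(zip(idxs, vals))
                    # Reject any term that fires on a negative example.
                    if any(all(row[j] == v for j, v in lits) for row in neg):
                        continue
                    cover = {i for i in pos
                             if all(X[i][j] == v for j, v in lits)}
                    if len(cover) > len(best_cover):
                        best, best_cover = lits, cover
        if not best_cover:
            break  # remaining positives cannot be separated
        terms.append(best)
        pos -= best_cover
    return terms
```

Because the enumeration is exhaustive over small terms, the sketch is only practical for a modest number of covariates; the greedy cover step is what keeps the resulting DNF sparse.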
A theory of additive Markov chains with long-range memory is used to describe the correlation properties of coarse-grained literary texts. The complex structure of the correlations in texts is revealed. Antipersistent correlations at small distances
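The two-point correlation function underlying such an analysis can be sketched as follows for a coarse-grained (binary) sequence; mapping a text to bits (e.g. vowel vs. consonant) is an assumed illustration here, not necessarily the coarse-graining used in the work.

```python
def binary_correlation(bits, r):
    """Normalized two-point correlation of a 0/1 sequence:
    C(r) = <b_i * b_{i+r}> - <b>^2.
    C(r) < 0 signals antipersistence at distance r."""
    n = len(bits) - r
    mean = sum(bits) / len(bits)
    return sum(bits[i] * bits[i + r] for i in range(n)) / n - mean ** 2
```

For a strictly alternating binary sequence, for example, C(1) = -1/4, the maximally antipersistent value for a binary chain, while C(2) is positive.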
We introduce a contrarian opinion (CO) model in which a fraction p of contrarians within a group holds a strong opinion opposite to that held by the rest of the group. At the initial stage, stable clusters of two opinions, A and B, exist. Then
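A minimal simulation sketch of a contrarian dynamic, under the illustrative (assumed) rule that conformists adopt the instantaneous global majority opinion while contrarians adopt the opposite one:

```python
import random

def step(opinions, p_contrarian, rng):
    """One synchronous update of a toy contrarian-opinion dynamic.

    opinions: list of 0/1 (opinion B / opinion A).
    Conformists follow the global majority; with probability
    p_contrarian an agent acts as a contrarian and opposes it.
    Illustrative sketch, not the CO model's exact update rule.
    """
    n = len(opinions)
    majority_is_A = sum(opinions) * 2 > n
    new = []
    for _ in range(n):
        if rng.random() < p_contrarian:
            new.append(0 if majority_is_A else 1)  # contrarian: oppose
        else:
            new.append(1 if majority_is_A else 0)  # conformist: follow
    return new

rng = random.Random(0)
ops = [rng.randint(0, 1) for _ in range(1000)]
for _ in range(20):
    ops = step(ops, p_contrarian=0.1, rng=rng)
frac_A = sum(ops) / len(ops)
```

With a small contrarian fraction p, the population settles near a strong majority of one opinion, offset from full consensus by roughly p.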
In this work we investigate the origin of the parabolic relation between skewness and kurtosis that is often encountered in the analysis of experimental time series. We argue that the numerical values of the coefficients of the curve may provide information
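The two moments involved can be estimated as below. The exponentially distributed test series is a hypothetical illustration, not the experimental data analysed in the work; for an exponential distribution the population values are skewness S = 2 and (non-excess) kurtosis K = 9.

```python
import random

def skew_kurt(xs):
    """Sample skewness S and (non-excess) kurtosis K of a series,
    from the second, third, and fourth central moments."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2

rng = random.Random(1)
series = [rng.expovariate(1.0) for _ in range(100_000)]
S, K = skew_kurt(series)  # population values for exp(1): S = 2, K = 9
```

Plotting (S, K) pairs estimated from many such series, each drawn from a different distribution, is one way to visualize the parabolic relation discussed in the abstract.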
We define an entropy based on a chosen governing probability distribution. If a certain kind of measurement follows such a distribution, the distribution also gives us a suitable scale at which to study it. This scale will appear as a link function that is applied to t
While designing inductive bias in neural architectures has been widely studied, we hypothesize that transformer networks are flexible enough to learn inductive bias from suitable generic tasks. Here, we replace architecture engineering by encoding in