Complexity measures are essential for understanding complex systems, and numerous definitions exist for analyzing one-dimensional data. However, extensions of these approaches to two- or higher-dimensional data, such as images, are much less common. Here, we reduce this gap by combining the ideas of permutation entropy with a relative entropic index. We develop a numerical procedure that can be easily implemented to evaluate the complexity of two- or higher-dimensional patterns. We demonstrate this method in several scenarios involving both numerical experiments and empirical data. Specifically, we have applied the method to i) fractal landscapes generated numerically, where we compare our measures with the Hurst exponent; ii) liquid crystal textures, where nematic-isotropic-nematic phase transitions were properly identified; iii) 12 characteristic textures of liquid crystals, where the different values show that the method can distinguish different phases; and iv) Ising surfaces, where our method identified the critical temperature and also proved to be stable.
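The two-dimensional permutation-entropy idea can be illustrated with a short sketch. The function below is a minimal Bandt-Pompe-style estimator written for this note (the function name, window sizes, and normalization are illustrative assumptions, not the authors' exact procedure, which additionally combines the entropy with a relative entropic index):

```python
import itertools
import math
import numpy as np

def permutation_entropy_2d(image, dx=2, dy=2):
    """Normalized 2D permutation entropy (a Bandt-Pompe-style sketch).

    Slides a dx-by-dy window over `image`, maps each window to the
    permutation that sorts its flattened values, and returns the Shannon
    entropy of the permutation distribution divided by log((dx*dy)!),
    so the result lies in [0, 1].
    """
    rows, cols = image.shape
    counts = {}
    n = 0
    for i in range(rows - dx + 1):
        for j in range(cols - dy + 1):
            window = image[i:i + dx, j:j + dy].ravel()
            pattern = tuple(np.argsort(window, kind="stable"))
            counts[pattern] = counts.get(pattern, 0) + 1
            n += 1
    probs = np.array(list(counts.values()), dtype=float) / n
    h = -np.sum(probs * np.log(probs))
    return h / math.log(math.factorial(dx * dy))

rng = np.random.default_rng(0)
noise = rng.random((64, 64))  # disordered pattern: entropy near 1
ramp = np.add.outer(np.arange(64), np.arange(64)).astype(float)  # ordered: entropy 0
print(permutation_entropy_2d(noise), permutation_entropy_2d(ramp))
```

A fully random image uses all ordinal patterns almost uniformly, while a monotone ramp produces a single pattern, which is the qualitative contrast the complexity measure exploits.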
We introduce a theoretical framework for understanding and predicting the complexity of sequence classification tasks, using a novel extension of the theory of Boolean function sensitivity. The sensitivity of a function, given a distribution over input sequences, quantifies the number of disjoint subsets of the input sequence that can each be individually changed to change the output. We argue that standard sequence classification methods are biased towards learning low-sensitivity functions, so that tasks requiring high sensitivity are more difficult. To that end, we show analytically that simple lexical classifiers can only express functions of bounded sensitivity, and we show empirically that low-sensitivity functions are easier to learn for LSTMs. We then estimate sensitivity on 15 NLP tasks, finding that sensitivity is higher on challenging tasks collected in GLUE than on simple text classification tasks, and that sensitivity predicts the performance both of simple lexical classifiers and of vanilla BiLSTMs without pretrained contextualized embeddings. Within a task, sensitivity predicts which inputs are hard for such simple models. Our results suggest that the success of massively pretrained contextual representations stems in part from the fact that they provide representations from which information can be extracted by low-sensitivity decoders.
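The underlying classical notion can be computed directly for small Boolean functions. The sketch below measures the average (per-bit) sensitivity by exhaustive enumeration; note this is the textbook definition over single-bit flips, not the paper's distribution-aware block-sensitivity extension, and the function names are illustrative:

```python
import itertools

def average_sensitivity(f, n):
    """Average sensitivity of a Boolean function f: {0,1}^n -> {0,1}:
    the expected number of single-bit flips that change f(x), with x
    drawn uniformly. Exhaustive, so only practical for small n."""
    total = 0
    for bits in itertools.product((0, 1), repeat=n):
        x = list(bits)
        fx = f(x)
        for i in range(n):
            x[i] ^= 1          # flip bit i
            if f(x) != fx:
                total += 1
            x[i] ^= 1          # flip it back
    return total / 2 ** n

# Parity is maximally sensitive; a "dictatorship" depends on one bit only.
parity = lambda x: sum(x) % 2
first_bit = lambda x: x[0]
print(average_sensitivity(parity, 8))     # 8.0: every flip changes parity
print(average_sensitivity(first_bit, 8))  # 1.0: only bit 0 matters
```

Parity-like (high-sensitivity) targets are exactly the kind of function the abstract argues standard sequence classifiers are biased against.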
We introduce a quantum version of the statistical complexity measure, in the context of quantum information theory, and use it as a signalling function of quantum order-disorder transitions. We discuss the possibility that such transitions characterize interesting physical phenomena, such as quantum phase transitions or abrupt variations in correlation distributions. We apply our measure to two exactly solvable Hamiltonian models, namely the $1D$ quantum Ising model and the Heisenberg XXZ spin-$1/2$ chain. We also compute this measure for one-qubit and two-qubit reduced states of these models, and analyse its behaviour across their quantum phase transitions for finite system sizes as well as in the thermodynamic limit by using the Bethe ansatz.
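For intuition, one widely used classical recipe (the López-Ruiz–Mancini–Calbet product form) can be applied to the spectrum of a density matrix; the abstract's quantum measure may be defined differently, so the sketch below is an assumption-laden illustration of the general order-disorder product, not the authors' construction:

```python
import numpy as np

def statistical_complexity(rho):
    """LMC-style complexity C = H * D for a density matrix `rho`:
    H is the von Neumann entropy normalized by log(d), and D is the
    squared Hilbert-Schmidt distance to the maximally mixed state I/d.
    C vanishes both for pure states (H = 0) and for the maximally
    mixed state (D = 0), peaking in between."""
    d = rho.shape[0]
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]               # drop numerical zeros
    H = -np.sum(evals * np.log(evals)) / np.log(d)
    delta = rho - np.eye(d) / d
    D = np.trace(delta @ delta).real
    return H * D

pure = np.array([[1.0, 0.0], [0.0, 0.0]])      # pure qubit state: C = 0
mixed = np.eye(2) / 2                          # maximally mixed: C = 0
partial = np.array([[0.9, 0.0], [0.0, 0.1]])   # intermediate: C > 0
print(statistical_complexity(pure),
      statistical_complexity(mixed),
      statistical_complexity(partial))
```

The vanishing of C at both extremes is what makes such product measures useful as signalling functions for order-disorder transitions.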
In isotope ratio mass spectrometry (IRMS), any sample (S) measurement is performed as a relative difference ((S/W)δ_i) from a working lab reference (W), but the result is evaluated relative to a recommended standard (D): (S/D)δ_i. It is thus assumed that different source-specific results ((S1/D)δ_i, (S2/D)δ_i) would represent their sources (S1, S2) and be accurately intercomparable. However, this assumption has never been checked. In this manuscript we carry out this task by considering a system such as CO2+-IRMS, and we present a model for a priori prediction of output uncertainty. Our study shows that scale conversion, even with the aid of auxiliary reference standard(s) A_i, cannot make (S/D)δ_i free from W, and that the ((S/W)δ_i, (A1/W)δ_i, (A2/W)δ_i) → (S/D)δ_i conversion formula normally used in the literature is invalid. The correct relation has been worked out, leading to, e.g., f_J([(S/W)δ_J^CO2 ± p‰], [(A1/W)δ_J^CO2 ± p‰], [(A2/W)δ_J^CO2 ± p‰]) = ((S/D)δ_J^CO2 ± 4.5p‰), whereas F_J([(S/W)δ_J^CO2 ± p‰], [(A1/W)δ_J^CO2 ± p‰]) = ((S/D)δ_J^CO2 ± 1.2p‰). That is, contrary to the general belief (Nature 1978, 271, 534), scale conversion employing one rather than two A_i standards should yield a more accurate (S/D)δ_i. A more valuable finding, however, is that transforming any δ estimate into its absolute value improves accuracy, whereas any reverse process increases uncertainty. Thus, although the absolute estimates of isotopic-CO2 and constituent elemental isotopic abundance ratios can be equally accurate, any differential estimate is shown to be less accurate. Further, when S and D are similar, any absolute estimate is shown to be nearly absolutely accurate, but any (S/D)δ value becomes meaningless. That is, estimated source-specific absolute values, rather than the corresponding differential results, should truly represent their sources and be closely intercomparable.
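The δ notation and the exact single-anchor scale-conversion identity underlying this discussion can be sketched as follows. The isotope ratios below are made-up numbers chosen only for a consistency check, and this is the standard one-point conversion between reference frames, not the multi-standard formula the abstract critiques:

```python
def delta(r_sample, r_ref):
    """Delta value in per mil: relative difference of isotope ratios."""
    return (r_sample / r_ref - 1.0) * 1000.0

def convert_scale(d_sw, d_wd):
    """Exact conversion from the working-reference (W) scale to the
    standard (D) scale:
        delta_{S/D} = delta_{S/W} + delta_{W/D}
                      + delta_{S/W} * delta_{W/D} / 1000
    (all deltas in per mil)."""
    return d_sw + d_wd + d_sw * d_wd / 1000.0

# Consistency check with arbitrary (made-up) isotope ratios.
r_s, r_w, r_d = 0.0112372, 0.0111802, 0.0112000
d_sd_direct = delta(r_s, r_d)
d_sd_converted = convert_scale(delta(r_s, r_w), delta(r_w, r_d))
print(d_sd_direct, d_sd_converted)
```

The identity is algebraically exact, which highlights that the uncertainty issues raised in the abstract concern measured deltas and auxiliary standards, not the conversion algebra itself.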
We show that finding a graph realization with the minimum Randic index for a given degree sequence is solvable in polynomial time by formulating the problem as a minimum-weight perfect b-matching problem. However, the realization found via this reduction is not guaranteed to be connected. Approximating the minimum-weight b-matching problem subject to a connectivity constraint is shown to be NP-hard. For instances in which the optimal solution to the minimum Randic index problem is not connected, we describe a heuristic that connects the graph using pairwise edge exchanges that preserve the degree sequence. In our computational experiments the heuristic performs well, and the Randic index of the realization after applying it is within 3% of the unconstrained optimal value on average. Although we focus on minimizing the Randic index, our results extend to maximizing it as well. We provide applications of the Randic index to the synchronization of neuronal networks controlling respiration in mammals and to normalizing cortical thickness networks in diagnosing individuals with dementia.
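The quantity being optimized is easy to state. A minimal sketch of computing the Randic index R(G) = Σ_{uv ∈ E} (deg(u)·deg(v))^{-1/2} from an edge list (the function name and representation are our own choices for illustration):

```python
import math

def randic_index(edges):
    """Randic index: sum over edges uv of 1/sqrt(deg(u) * deg(v))."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum(1.0 / math.sqrt(deg[u] * deg[v]) for u, v in edges)

# Path on 4 vertices (degrees 1, 2, 2, 1): R = 2/sqrt(2) + 1/2.
path = [(0, 1), (1, 2), (2, 3)]
# Star K_{1,3} (center degree 3, leaves degree 1): R = 3/sqrt(3).
star = [(0, 1), (0, 2), (0, 3)]
print(randic_index(path), randic_index(star))
```

Since the index decomposes edge-by-edge over degree pairs, fixing a degree sequence fixes every possible edge weight in advance, which is what makes the b-matching reformulation natural.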
Heart beat data recorded from subjects before and during meditation are analyzed using two different scaling analysis methods. These analyses reveal that meditation severely affects the long-range correlations of the heart beat of a normal heart. Moreover, it is found that meditation induces periodic behavior in the heart beat. The complexity of the heart rate variability is quantified using multiscale entropy analysis and recurrence analysis. The complexity of the heart beat during meditation is found to be higher.
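The multiscale entropy procedure can be sketched in a few lines: coarse-grain the series by non-overlapping averaging at each scale, then compute sample entropy of the coarse-grained series. This is a generic textbook implementation (tolerance recomputed per scale, a common simplification), not the exact pipeline used on the heart beat data:

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy SampEn(m, r): -log of the ratio of (m+1)-length
    template matches to m-length template matches, with Chebyshev
    tolerance r = r_frac * std(x) and self-matches excluded."""
    x = np.asarray(x, dtype=float)
    r = r_frac * np.std(x)

    def count(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        c = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += np.sum(d <= r)
        return c

    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3, 4)):
    """Sample entropy of non-overlapping-mean coarse-grainings of x."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out

rng = np.random.default_rng(1)
noise = rng.standard_normal(1000)
print(multiscale_entropy(noise))
```

A strongly periodic series (as meditation reportedly induces) has many repeating templates and hence low sample entropy at scale 1, so comparing entropy across scales is what separates genuine complexity from mere regularity or randomness.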