The subject-verb-object (SVO) word order prevalent in English is shared by about $42\%$ of the world's languages. Another $45\%$ of all languages follow the SOV order, $9\%$ the VSO order, and fewer languages use the three remaining permutations. None of the many extant explanations of this phenomenon takes into account the difficulty of implementing these permutations in the brain. We propose a plausible model of sentence generation inspired by the recently proposed Assembly Calculus framework of brain function. Our model results in a natural explanation of the uneven frequencies. Estimating the parameters of this model yields predictions of the relative difficulty of dis-inhibiting one brain area from another. Our model is based on the standard syntax tree, a simple binary tree with three leaves, each corresponding to one of the three parts of a basic sentence. The leaves can be activated through lock and unlock operations, and the sequence in which the leaves are activated implements a specific word order. More generally, we also formulate and algorithmically solve the problems of implementing a permutation of the leaves of any binary tree, and of selecting the permutation that is easiest to implement on a given binary tree.
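As a toy illustration only (this is not the paper's cost model), one can enumerate the six activation orders of the three leaves of the standard syntax tree and score each with a crude proxy for implementation difficulty: the number of adjacent leaves in the order that are not siblings in the tree. The tree shape, the `toy_cost` function, and the sibling-based proxy are all assumptions made for this sketch.

```python
from itertools import permutations

# Standard syntax tree with three leaves: subject S, with V and O as siblings.
# (Assumed shape for illustration; the paper considers arbitrary binary trees.)
TREE = ("S", ("V", "O"))

def leaves(t):
    """Left-to-right leaf sequence of a nested-tuple binary tree."""
    if isinstance(t, str):
        return [t]
    return leaves(t[0]) + leaves(t[1])

def sibling_pairs(t, pairs=None):
    """Unordered pairs of leaves that share an immediate parent."""
    if pairs is None:
        pairs = set()
    if isinstance(t, tuple):
        left, right = t
        if isinstance(left, str) and isinstance(right, str):
            pairs.add(frozenset((left, right)))
        sibling_pairs(left, pairs)
        sibling_pairs(right, pairs)
    return pairs

def toy_cost(order, tree=TREE):
    """Toy difficulty proxy: adjacent leaves in the order that are NOT siblings."""
    sib = sibling_pairs(tree)
    return sum(1 for a, b in zip(order, order[1:]) if frozenset((a, b)) not in sib)

for p in permutations(leaves(TREE)):
    print("".join(p), toy_cost(p))
```

Under this (hypothetical) proxy, orders that keep the sibling pair V–O adjacent, such as SVO and SOV, score lower than orders that separate them, such as VSO and OSV, loosely echoing the observed frequency skew.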
Motor imagery-based brain-computer interfaces (BCIs) use an individual's ability to volitionally modulate localized brain activity as a therapy for motor dysfunction or to probe causal relations between brain activity and behavior. However, many indiv
Conventional neuroimaging analyses have revealed the computational specificity of localized brain regions, exploiting the power of the subtraction technique in fMRI and event-related potential analyses in EEG. Moving beyond this convention, many rese
Self-organized criticality (SOC) refers to the ability of complex systems to evolve towards a second-order phase transition at which interactions between system components lead to scale-invariant events beneficial for system performance. For the last tw
We consider the cyclic closure of a language, and its generalisation to the operators $C^k$ introduced by Brandstädt. We prove that the cyclic closure of an indexed language is indexed, and that if $L$ is a context-free language then $C^k(L)$ is indexed.
This article reviews how organisms learn and recognize the world through the dynamics of neural networks from the perspective of Bayesian inference, and introduces a view on how such dynamics are described by the laws for the entropy of neural activit