
Implementing Permutations in the Brain and SVO Frequencies of Languages

Added by Denis Turcu
Publication date: 2021
Fields: Biology
Language: English





The subject-verb-object (SVO) word order prevalent in English is shared by about 42% of the world's languages. Another 45% of all languages follow the SOV order, 9% the VSO order, and fewer languages use the three remaining permutations. None of the many extant explanations of this phenomenon take into account the difficulty of implementing these permutations in the brain. We propose a plausible model of sentence generation inspired by the recently proposed Assembly Calculus framework of brain function. Our model results in a natural explanation of the uneven frequencies. Estimating the parameters of this model yields predictions of the relative difficulty of dis-inhibiting one brain area from another. Our model is based on the standard syntax tree, a simple binary tree with three leaves. Each leaf corresponds to one of the three parts of a basic sentence. The leaves can be activated through lock and unlock operations, and the sequence of activation of the leaves implements a specific word order. More generally, we also formulate and algorithmically solve the problems of implementing a permutation of the leaves of any binary tree, and of selecting the permutation that is easiest to implement on a given binary tree.
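The last problem in the abstract, ranking leaf permutations of a binary tree by implementation cost, can be sketched in a few lines of Python. Everything below is illustrative: the tree shape `(("S", "V"), "O")` and the cost model (each emitted leaf pays one "unlock" per tree node on its root-to-leaf path that was not already unlocked by the previous leaf) are assumptions standing in for the paper's actual lock/unlock mechanics, and this toy cost does not reproduce the paper's estimated cross-linguistic ranking.

```python
from itertools import permutations

# Hypothetical three-leaf syntax tree; the paper's actual tree shape
# and cost parameters are not specified here.
TREE = (("S", "V"), "O")

def leaves(t):
    """Collect leaves of a nested-tuple binary tree, left to right."""
    if isinstance(t, str):
        return [t]
    return leaves(t[0]) + leaves(t[1])

def path(t, leaf, acc=()):
    """Root-to-leaf path as a tuple of 0 (left) / 1 (right) moves."""
    if isinstance(t, str):
        return acc if t == leaf else None
    for i in (0, 1):
        p = path(t[i], leaf, acc + (i,))
        if p is not None:
            return p
    return None

def cost(t, order):
    """Assumed cost: each leaf pays one unlock per path node not
    shared with the previously emitted leaf's path."""
    total, prev = 0, ()
    for leaf in order:
        p = path(t, leaf)
        shared = 0
        for a, b in zip(prev, p):
            if a != b:
                break
            shared += 1
        total += len(p) - shared
        prev = p
    return total

def easiest_orders(t):
    """All leaf permutations attaining the minimum cost on tree t."""
    orders = list(permutations(leaves(t)))
    best = min(cost(t, o) for o in orders)
    return [o for o in orders if cost(t, o) == best]
```

Under this particular cost model, orders whose consecutive leaves share long path prefixes come out cheaper; a different tree shape or cost assignment would rank the six permutations differently.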



Related research

Motor imagery-based brain-computer interfaces (BCIs) use an individual's ability to volitionally modulate localized brain activity as a therapy for motor dysfunction or to probe causal relations between brain activity and behavior. However, many individuals cannot learn to successfully modulate their brain activity, greatly limiting the efficacy of BCI for therapy and for basic scientific inquiry. Previous research suggests that coherent activity across diverse cognitive systems is a hallmark of individuals who can successfully learn to control the BCI. However, little is known about how these distributed networks interact through time to support learning. Here, we address this gap in knowledge by constructing and applying a multimodal network approach to decipher brain-behavior relations in motor imagery-based brain-computer interface learning using MEG. Specifically, we employ a minimally constrained matrix decomposition method (non-negative matrix factorization) to simultaneously identify regularized, covarying subgraphs of functional connectivity, to assess their similarity to task performance, and to detect their time-varying expression. Individuals also displayed marked variation in the spatial properties of subgraphs, such as the connectivity between the frontal lobe and the rest of the brain, and in the temporal properties of subgraphs, such as the stage of learning at which they reached maximum expression. From these observations, we posit a conceptual model in which certain subgraphs support learning by modulating brain activity in regions important for sustaining attention. To test this model, we use tools that stipulate regional dynamics on a networked system (network control theory), and find that good learners display a single subgraph whose temporal expression tracked performance and whose architecture supports easy modulation of brain regions important for attention.
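The core decomposition step described above, factoring a nonnegative connectivity-by-time matrix into covarying subgraphs and their time-varying expression, can be illustrated with a minimal multiplicative-update NMF. This is a generic sketch, not the authors' regularized variant: the matrix shapes and update rule are the textbook Lee-Seung form.

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Minimal multiplicative-update NMF: V ~ W @ H, all entries >= 0.
    V: (edges x time windows) nonnegative functional-connectivity matrix.
    W: edge weights of each of the k subgraphs.
    H: time-varying expression of each subgraph."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    eps = 1e-9  # guard against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Columns of `W` are the "subgraphs"; rows of `H` track when each subgraph is expressed over the session, which is the quantity the abstract relates to learning stage.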
Conventional neuroimaging analyses have revealed the computational specificity of localized brain regions, exploiting the power of the subtraction technique in fMRI and event-related potential analyses in EEG. Moving beyond this convention, many researchers have begun exploring network-based neurodynamics and coordination between brain regions as a function of behavioral parameters or environmental statistics; however, most approaches average evoked activity across the experimental session to study task-dependent networks. Here, we examined ongoing oscillatory activity and used a methodology to estimate directionality in brain-behavior interactions. After source reconstruction, activity within specific frequency bands in a priori regions of interest was linked to continuous behavioral measurements, and we used a predictive filtering scheme to estimate the asymmetry between brain-to-behavior and behavior-to-brain prediction. We applied this approach to a simulated driving task and examined directed relationships between brain activity and continuous driving behavior (steering or heading error). Our results indicated that two neuro-behavioral states emerge in this naturalistic environment: a Proactive brain state that actively plans the response to the sensory information, and a Reactive brain state that processes incoming information and reacts to environmental statistics.
Self-organized criticality (SOC) refers to the ability of complex systems to evolve towards a 2nd-order phase transition at which interactions between system components lead to scale-invariant events beneficial for system performance. Over the last two decades, considerable experimental evidence has accumulated that the mammalian cortex, with its diversity in cell types and connections, might exhibit SOC. Here we review experimental findings of isolated, layered cortex preparations to self-organize towards four dynamical motifs identified in the cortex in vivo: up-states, oscillations, neuronal avalanches, and coherence potentials. During up-states, the synchronization observed for nested theta/gamma-oscillations embeds scale-invariant neuronal avalanches that exhibit robust power law scaling in size with a slope of -3/2 and a critical branching parameter of 1. This dynamical coordination, tracked in the local field potential (nLFP) and pyramidal neuron activity using 2-photon imaging, emerges autonomously in superficial layers of organotypic cortex cultures and acute cortex slices, is homeostatically regulated, displays separation of time scales, and reveals unique size vs. quiet time dependencies. A threshold operation identifies coherence potentials: avalanches that, in addition, maintain the precise time course of propagated synchrony. Avalanches emerge under conditions of external driving. Control parameters are established by the balance of excitation and inhibition (E/I) and the neuromodulator dopamine. This rich dynamical repertoire is not observed in dissociated cortex cultures, which lack cortical layers and exhibit dynamics similar to a 1st-order phase transition. The precise interactions between up-states, nested oscillations, avalanches, and coherence potentials in superficial cortical layers provide compelling evidence for SOC in the brain.
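The -3/2 power law and critical branching parameter of 1 mentioned above can be reproduced with a minimal critical branching (Galton-Watson) process. This is a standard textbook simulation, not the authors' experimental pipeline; here each active unit activates 2 descendants with probability 1/2 (so the mean offspring, the branching parameter, is 1).

```python
import random

def avalanche_size(cap=10**5, rng=random):
    """Size of one avalanche of a critical branching process.
    Each active unit activates 2 units with probability 1/2, else 0,
    giving branching parameter 1; at criticality, avalanche sizes
    follow a power law with exponent -3/2 in the large-size limit.
    `cap` truncates rare very large avalanches."""
    active, size = 1, 0
    while active and size < cap:
        size += active
        # offspring of this generation: 2 per successful unit
        active = sum(2 for _ in range(active) if rng.random() < 0.5)
    return size
```

With these offspring probabilities, half of all avalanches die immediately (size 1), while the heavy -3/2 tail occasionally produces avalanches orders of magnitude larger, the hallmark of scale invariance described in the review.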
We consider the cyclic closure of a language, and its generalisation to the operators $C^k$ introduced by Brandstädt. We prove that the cyclic closure of an indexed language is indexed, and that if $L$ is a context-free language then $C^k(L)$ is indexed.
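For concreteness, the cyclic closure of a language is the set of all rotations of its words: cyc(L) = { vu : uv ∈ L }. The sketch below computes it for finite languages only; the paper's contribution is the much harder grammar-level construction showing closure for indexed and context-free languages.

```python
def cyclic_closure(language):
    """Cyclic closure of a finite language: every rotation of every word.
    cyc(L) = { w[i:] + w[:i] : w in L, 0 <= i < len(w) }."""
    out = set()
    for w in language:
        for i in range(max(len(w), 1)):
            out.add(w[i:] + w[:i])
    return out
```

Note that the operation is idempotent: rotating a rotation yields another rotation of the same word, so applying `cyclic_closure` twice gives the same set.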
Hideaki Shimazaki, 2020
This article reviews how organisms learn and recognize the world through the dynamics of neural networks from the perspective of Bayesian inference, and introduces a view on how such dynamics is described by the laws for the entropy of neural activity, a paradigm that we call thermodynamics of the Bayesian brain. The Bayesian brain hypothesis sees the stimulus-evoked activity of neurons as an act of constructing the Bayesian posterior distribution based on the generative model of the external world that an organism possesses. A closer look at the stimulus-evoked activity at early sensory cortices reveals that feedforward connections initially mediate the stimulus-response, which is later modulated by input from recurrent connections. Importantly, not the initial response, but the delayed modulation expresses animals cognitive states such as awareness and attention regarding the stimulus. Using a simple generative model made of a spiking neural population, we reproduce the stimulus-evoked dynamics with the delayed feedback modulation as the process of the Bayesian inference that integrates the stimulus evidence and a prior knowledge with time-delay. We then introduce a thermodynamic view on this process based on the laws for the entropy of neural activity. This view elucidates that the process of the Bayesian inference works as the recently-proposed information-theoretic engine (neural engine, an analogue of a heat engine in thermodynamics), which allows us to quantify the perceptual capacity expressed in the delayed modulation in terms of entropy.
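The two-stage dynamic described above, an initial feedforward response followed by delayed recurrent modulation that completes the Bayesian inference, can be caricatured in a few lines. This is a toy illustration under assumed discrete hypotheses, not the spiking-population generative model or the entropy-based quantities of the article.

```python
import numpy as np

def delayed_posterior(likelihood, prior, delay, T):
    """Toy two-stage inference: for t < delay the response reflects the
    normalized feedforward likelihood alone; from t = delay onward,
    recurrent feedback folds in the prior, yielding the full Bayesian
    posterior. Returns a (T x n_hypotheses) array of distributions."""
    likelihood = np.asarray(likelihood, dtype=float)
    prior = np.asarray(prior, dtype=float)
    rows = []
    for t in range(T):
        p = likelihood if t < delay else likelihood * prior
        rows.append(p / p.sum())
    return np.array(rows)
```

In this caricature, the "delayed modulation" is exactly the step where the prior multiplies the evidence; the article's thermodynamic view quantifies the perceptual work done by that step via the entropy change of the neural activity.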
