The duality of sensory inference and motor control has been known since the 1960s and has recently been recognized as a commonality between the computations required for posterior distributions in Bayesian inference and those required for value functions in optimal control. Meanwhile, an intriguing question about the brain is why the entire neocortex shares a canonical six-layer architecture even though its posterior and anterior halves are engaged in sensory processing and motor control, respectively. Here we consider the hypothesis that the sensory and motor cortical circuits implement these dual computations for Bayesian inference and optimal control, that is, for perceptual and value-based decision making, respectively. We first review the classic duality of inference and control in linear quadratic systems and then review the correspondence between dynamic Bayesian inference and optimal control. Based on the architecture of the canonical cortical circuit, we explore how different cortical neurons may represent variables and implement computations.
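As a concrete reminder of the linear quadratic duality this abstract refers to, the following sketch (a toy illustration with placeholder matrices, not material from the paper) iterates the Kalman-filter error-covariance recursion for the dual system (A replaced by A^T, C replaced by B^T) alongside the LQR cost-to-go recursion and confirms that the two coincide.

```python
import numpy as np

def kalman_riccati_step(P, A, C, Q, R):
    """A priori error-covariance update of the Kalman filter."""
    gain_term = P @ C.T @ np.linalg.inv(C @ P @ C.T + R) @ C @ P
    return A @ (P - gain_term) @ A.T + Q

def lqr_riccati_step(V, A, B, Q, R):
    """Backward cost-to-go (value-function) update of the LQ regulator."""
    gain_term = V @ B @ np.linalg.inv(B.T @ V @ B + R) @ B.T @ V
    return Q + A.T @ (V - gain_term) @ A

# Toy system matrices (placeholders, not from the paper).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = 0.01 * np.eye(2)
R = 0.1 * np.eye(1)

P = np.eye(2)   # error covariance of the dual filtering problem
V = np.eye(2)   # cost-to-go matrix of the control problem
for _ in range(50):
    P = kalman_riccati_step(P, A.T, B.T, Q, R)   # filter for the dual system (A -> A^T, C -> B^T)
    V = lqr_riccati_step(V, A, B, Q, R)
print(np.allclose(P, V))   # True: the two recursions coincide under the duality map
```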
Winner-Take-All (WTA) refers to the neural operation that selects a (typically small) group of neurons from a large neuron pool. It is conjectured to underlie many of the brain's fundamental computational abilities. However, not much is known about the robustness of a spike-based WTA network to the inherent randomness of the input spike trains. In this work, we consider a spike-based $k$-WTA model wherein $n$ randomly generated input spike trains compete with each other based on their underlying statistics, and $k$ winners are supposed to be selected. We slot the time evenly with each time slot of length $1\,\mathrm{ms}$, and model the $n$ input spike trains as $n$ independent Bernoulli processes. The Bernoulli process is a good approximation of the popular Poisson process but is more biologically relevant as it takes the refractory periods into account. Due to the randomness in the input spike trains, no circuit can guarantee to select the correct winners in finite time. We focus on analytically characterizing the minimal amount of time needed so that a target minimax decision accuracy (success probability) can be reached. We first derive an information-theoretic lower bound on the decision time. We show that to achieve a (minimax) decision error $\le \delta$ (where $\delta \in (0,1)$), the computation time of any WTA circuit is at least \[ \big((1-\delta) \log\big(k(n-k)+1\big) - 1\big)\, T_{\mathcal{R}}, \] where $T_{\mathcal{R}}$ is a difficulty parameter of the WTA task that is independent of $\delta$, $n$, and $k$. We then design a simple WTA circuit whose decision time is \[ O\!\left(\Big(\log\frac{1}{\delta} + \log k(n-k)\Big)\, T_{\mathcal{R}}\right). \] It turns out that for any fixed $\delta \in (0,1)$, this decision time is order-optimal in terms of its scaling in $n$, $k$, and $T_{\mathcal{R}}$.
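For intuition about the setting (a hedged sketch of the model, not the circuit analysed in the paper), the snippet below draws $n$ independent Bernoulli spike trains in 1 ms slots and applies an idealised $k$-WTA readout that selects the $k$ inputs with the largest empirical spike counts; the firing probabilities are arbitrary placeholders.

```python
import numpy as np

# Problem setup: n inputs, k winners, T one-millisecond time slots.
rng = np.random.default_rng(0)
n, k, T = 10, 3, 2000
p = np.sort(rng.uniform(0.05, 0.5, n))      # per-slot spike probabilities (placeholders)

spikes = rng.random((T, n)) < p             # n independent Bernoulli spike trains
counts = spikes.sum(axis=0)                 # spike count of each input after T slots
winners = np.argsort(counts)[-k:]           # idealised k-WTA readout: top-k spike counts

true_winners = np.argsort(p)[-k:]           # inputs with the k largest firing probabilities
print(set(winners) == set(true_winners))    # True with high probability when T is large relative to the rate gaps
```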
Brain-computer interfaces (BCIs) have shown promising results in restoring motor function to individuals with spinal cord injury. These systems have traditionally focused on the restoration of upper extremity function; however, the lower extremities have received relatively little attention. Early feasibility studies used noninvasive electroencephalogram (EEG)-based BCIs to restore walking function to people with paraplegia. However, the limited spatiotemporal resolution of EEG signals restricted the application of these BCIs to elementary gait tasks, such as the initiation and termination of walking. To restore more complex gait functions, BCIs must accurately decode additional degrees of freedom from brain signals. In this study, we used subdurally recorded electrocorticogram (ECoG) signals from able-bodied subjects to design a decoder capable of predicting the walking state and step rate information. We recorded ECoG signals from the motor cortices of two individuals as they walked on a treadmill at different speeds. Our offline analysis demonstrated that the state information could be decoded from >16 minutes of ECoG data with an unprecedented accuracy of 99.8%. Additionally, using a Bayesian filter approach, we achieved an average correlation coefficient between the decoded and true step rates of 0.934. When combined, these decoders may yield decoding accuracies sufficient to safely operate present-day walking prostheses.
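The decoder itself is not specified in enough detail here to reproduce; as a hedged illustration of the general Bayesian-filter idea mentioned above, the sketch below tracks a step rate on a discretised grid with a random-walk prior and a placeholder Gaussian observation model standing in for decoded ECoG features.

```python
import numpy as np

# Discretised step-rate grid (steps per second) and a uniform prior belief.
rates = np.linspace(0.0, 2.5, 51)
belief = np.full(rates.size, 1.0 / rates.size)

def transition_matrix(rates, sigma=0.05):
    """Random-walk dynamics: the step rate drifts slowly between time bins."""
    diff = rates[:, None] - rates[None, :]
    T = np.exp(-0.5 * (diff / sigma) ** 2)
    return T / T.sum(axis=0, keepdims=True)   # columns sum to 1

T = transition_matrix(rates)

def bayes_step(belief, observation, obs_sigma=0.3):
    """Predict with the random-walk prior, then update with a Gaussian likelihood."""
    predicted = T @ belief
    posterior = predicted * np.exp(-0.5 * ((observation - rates) / obs_sigma) ** 2)
    return posterior / posterior.sum()

rng = np.random.default_rng(1)
true_rate = 1.2
for _ in range(100):
    observation = true_rate + rng.normal(0.0, 0.3)   # stand-in for a decoded ECoG feature
    belief = bayes_step(belief, observation)
print(rates[np.argmax(belief)])   # posterior mode should be close to the true step rate
```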
A powerful experimental approach for investigating computation in networks of biological neurons is the use of cultured dissociated cortical cells grown into networks on a multi-electrode array. Such preparations allow investigation of network development, activity, plasticity, responses to stimuli, and the effects of pharmacological agents. They also exhibit whole-culture pathological bursting; understanding the mechanisms that underlie this behaviour could allow the creation of more useful cell cultures and may also have medical applications.
Maximum Entropy models can be inferred from large data-sets to uncover how collective dynamics emerge from local interactions. Here, such models are employed to investigate neurons recorded by multielectrode arrays in the human and monkey cortex. Taking advantage of the separation of excitatory and inhibitory neuron types, we construct a model that includes this distinction. This approach sheds light on differences between excitatory and inhibitory activity across brain states such as wakefulness and deep sleep, in agreement with previous findings. Additionally, Maximum Entropy models can unveil novel features of neuronal interactions, which are found to be dominated by pairwise interactions during wakefulness but are population-wide during deep sleep. In particular, inhibitory neurons are observed to be strongly tuned to the inhibitory population. Overall, we demonstrate that Maximum Entropy models can be useful for analyzing data-sets with classified neuron types and for revealing the respective roles of excitatory and inhibitory neurons in organizing coherent dynamics in the cerebral cortex.
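As a hedged sketch of this modelling approach (exact enumeration over states, with synthetic independent spike trains standing in for the recordings; not the authors' inference pipeline), the snippet below fits the fields and pairwise couplings of an Ising-type Maximum Entropy model by matching first- and second-order moments with gradient ascent on the log-likelihood.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
n_neurons, n_bins = 5, 5000
data = (rng.random((n_bins, n_neurons)) < 0.2).astype(float)    # synthetic binarized spike trains

states = np.array(list(product([0.0, 1.0], repeat=n_neurons)))  # all 2^n activity patterns
emp_mean = data.mean(axis=0)                                    # <s_i> from data
emp_corr = data.T @ data / n_bins                               # <s_i s_j> from data

h = np.zeros(n_neurons)                                         # fields
J = np.zeros((n_neurons, n_neurons))                            # pairwise couplings

for _ in range(2000):                                           # gradient ascent on the log-likelihood
    energy = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    prob = np.exp(energy)
    prob /= prob.sum()
    model_mean = prob @ states                                  # <s_i> under the model
    model_corr = states.T @ (prob[:, None] * states)            # <s_i s_j> under the model
    h += 0.1 * (emp_mean - model_mean)                          # moment matching
    J += 0.1 * (emp_corr - model_corr)
    np.fill_diagonal(J, 0.0)

print(np.max(np.abs(model_mean - emp_mean)))                    # should be small after fitting
```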
Under the Bayesian brain hypothesis, behavioural variations can be attributed to different priors over generative model parameters. This provides a formal explanation for why individuals exhibit inconsistent behavioural preferences when confronted with similar choices. For example, greedy preferences are a consequence of confident (or precise) beliefs over certain outcomes. Here, we offer an alternative account of behavioural variability using Rényi divergences and their associated variational bounds. Rényi bounds are analogous to the variational free energy (or evidence lower bound) and can be derived under the same assumptions. Importantly, these bounds provide a formal way to establish behavioural differences through an $\alpha$ parameter, given fixed priors. This rests on changes in $\alpha$ that alter the bound (on a continuous scale), inducing different posterior estimates and consequent variations in behaviour. Thus, it looks as if individuals have different priors and have reached different conclusions. More specifically, optimisation in the limit $\alpha \to 0^{+}$ leads to mass-covering variational estimates and increased variability in choice behaviour, whereas optimisation in the limit $\alpha \to +\infty$ leads to mass-seeking variational posteriors and greedy preferences. We exemplify this formulation through simulations of the multi-armed bandit task. We note that these $\alpha$ parameterisations may be especially relevant, i.e., may shape preferences, when the true posterior is not in the same family of distributions as the assumed (simpler) approximate density, which may be the case in many real-world scenarios. The ensuing departure from vanilla variational inference provides a potentially useful explanation for differences in the behavioural preferences of biological (or artificial) agents, under the assumption that the brain performs variational Bayesian inference.
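To make the mass-covering versus mass-seeking claim concrete (a toy sketch, not the paper's bandit simulations): since maximising the Rényi bound for fixed evidence is equivalent to minimising the Rényi divergence $D_{\alpha}(q \| p)$, the snippet below fits a single Gaussian to a bimodal target for several values of $\alpha$, showing the optimum widening as $\alpha \to 0^{+}$ and collapsing onto one mode as $\alpha$ grows. The target and the grid-search family are arbitrary placeholders.

```python
import numpy as np
from itertools import product
from scipy.special import logsumexp
from scipy.stats import norm

# Quadrature grid and a bimodal "true posterior" p (toy choice).
z = np.linspace(-8.0, 8.0, 4001)
log_dz = np.log(z[1] - z[0])
log_p = np.logaddexp(norm.logpdf(z, -2, 0.5), norm.logpdf(z, 2, 0.5)) + np.log(0.5)

def renyi_divergence(log_q, alpha):
    """D_alpha(q||p) = (1/(alpha-1)) * log of the integral of q^alpha * p^(1-alpha)."""
    return (logsumexp(alpha * log_q + (1 - alpha) * log_p) + log_dz) / (alpha - 1)

# Grid search over single-Gaussian approximations q = N(mu, sigma).
mus = np.linspace(-3.0, 3.0, 25)
sigmas = np.linspace(0.3, 3.0, 28)
for alpha in (0.1, 2.0, 20.0):
    mu, sigma = min(product(mus, sigmas),
                    key=lambda ms: renyi_divergence(norm.logpdf(z, ms[0], ms[1]), alpha))
    print(alpha, mu, sigma)   # alpha = 0.1: mu near 0 with large sigma (mass-covering);
                              # alpha = 20: one mode with sigma near 0.5 (mass-seeking)
```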