
The cerebellum could solve the motor error problem through error increase prediction

Published by: Sergio Verduzco-Flores
Publication date: 2014
Research field: Biology
Paper language: English





We present a cerebellar architecture with two main characteristics. The first is that complex spikes respond to increases in sensory error. The second is that cerebellar modules associate contexts where errors have increased in the past with corrective commands that stop the increase. We analyze the architecture formally and computationally for reaching in a 3D environment. For motor control, we show that this architecture has synergies with the Equilibrium-Point hypothesis, leading to novel ways to solve the motor error problem. In particular, the presence of desired equilibrium lengths for muscles provides a way to know when the error is increasing and which corrections to apply. In the context of Threshold Control Theory and Perceptual Control Theory, we show how to extend the model so that it implements anticipatory corrections in cascade control systems spanning from muscle contractions to cognitive operations.
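To make the proposed mechanism concrete, here is a minimal Python sketch under loose assumptions (a 1D reach toy, random context vectors, and an explicit target standing in for a desired equilibrium length; none of these details are from the paper). A "complex spike" fires whenever the sensory error grows between time steps, and each spike strengthens the association between the current context and a correction whose sign the explicit equilibrium target makes known.

```python
import numpy as np

rng = np.random.default_rng(0)

n_ctx = 16                    # context (mossy-fiber) dimensionality, arbitrary
W = np.zeros(n_ctx)           # context -> corrective-command weights
lr = 0.05                     # learning rate, illustrative

x, target = 0.0, 1.0          # 1D "hand" position and reach target
prev_err = abs(target - x)

for t in range(500):
    ctx = rng.random(n_ctx)                   # stand-in context encoding
    u = W @ ctx                               # learned corrective command
    x += 0.1 * (u + rng.normal(scale=0.05))   # noisy toy plant
    err = abs(target - x)

    if err > prev_err:        # error increased -> "complex spike"
        # Associate this context with a correction opposing the increase;
        # the sign is known because the desired equilibrium (the target)
        # is explicit, as in the Equilibrium-Point setting.
        W += lr * np.sign(target - x) * ctx
    prev_err = err

print(f"final position {x:.3f}, final error {abs(target - x):.3f}")
```

The point of the toy is the gating: learning happens only on error increases, and the Equilibrium-Point-style target is what tells the module which way to correct.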




Read also

In low-level sensory systems, it is still unclear how the noisy information collected locally by neurons may give rise to a coherent global percept. This is well demonstrated by the aperture problem in motion detection: since the luminance of an elongated line is symmetrical along its axis, its tangential velocity is ambiguous when measured locally. Here, we develop the hypothesis that motion-based predictive coding is sufficient to infer global motion. Our implementation is based on a context-dependent diffusion of a probabilistic representation of motion. In simulations we observe a progressive solution to the aperture problem similar to physiology and behavior. We demonstrate that this solution is the result of two underlying mechanisms. First, we demonstrate the formation of a tracking behavior favoring temporally coherent features independent of their texture. Second, we observe that incoherent features are explained away, while coherent information diffuses progressively to the global scale. Most previous models included ad hoc mechanisms, such as end-stopped cells or a selection layer, to track specific luminance-based features as necessary conditions for solving the aperture problem. Here, we have proved that motion-based predictive coding, as implemented in this functional model, is sufficient to solve the aperture problem. This solution may give insights into the role of prediction underlying a large class of sensory computations.
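As a loose illustration of this idea, the sketch below (drastically simplified and not the authors' model: a short 1D contour, eight discrete velocity hypotheses, and hand-picked diffusion weights) keeps a distribution over candidate velocities at each location. Evidence is ambiguous everywhere except at the line endpoints, and repeated predict-by-diffusion/update cycles let the endpoint evidence propagate until the center is disambiguated, mirroring the progressive solution described above.

```python
import numpy as np

n_loc, n_vel = 16, 8
belief = np.full((n_loc, n_vel), 1.0 / n_vel)   # uniform prior everywhere

def likelihood(loc):
    """Local measurement: flat (ambiguous) along the line's interior,
    peaked at the true velocity (index 3) only at the endpoints."""
    lik = np.ones(n_vel)
    if loc in (0, n_loc - 1):
        lik[:] = 0.1
        lik[3] = 1.0
    return lik / lik.sum()

liks = np.array([likelihood(i) for i in range(n_loc)])

for _ in range(200):
    # Prediction: belief diffuses to neighboring locations on the contour.
    pred = (0.5 * belief
            + 0.25 * np.roll(belief, 1, axis=0)
            + 0.25 * np.roll(belief, -1, axis=0))
    # Update: multiply by the local likelihood and renormalize.
    belief = pred * liks
    belief /= belief.sum(axis=1, keepdims=True)

print(belief[n_loc // 2].round(3))   # center now peaks at the true velocity
```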
Backpropagation (BP) uses detailed, unit-specific feedback to train deep neural networks (DNNs) with remarkable success. That biological neural circuits appear to perform credit assignment, but cannot implement BP, implies the existence of other powerful learning algorithms. Here, we explore the extent to which a globally broadcast learning signal, coupled with local weight updates, enables training of DNNs. We present both a learning rule, called global error-vector broadcasting (GEVB), and a class of DNNs, called vectorized nonnegative networks (VNNs), in which this learning rule operates. VNNs have vector-valued units and nonnegative weights past the first layer. The GEVB learning rule generalizes three-factor Hebbian learning, updating each weight by an amount proportional to the inner product of the presynaptic activation and a globally broadcast error vector when the postsynaptic unit is active. We prove that these weight updates are matched in sign to the gradient, enabling accurate credit assignment. Moreover, at initialization, these updates are exactly proportional to the gradient in the limit of infinite network width. GEVB matches the performance of BP in VNNs, and in some cases outperforms direct feedback alignment (DFA) applied in conventional networks. Unlike DFA, GEVB successfully trains convolutional layers. Altogether, our theoretical and empirical results point to a surprisingly powerful role for a global learning signal in training DNNs.
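The stated update rule is simple enough to sketch directly. In the toy Python below, the layer sizes, forward pass, and activity gate are illustrative assumptions; only the weight update itself follows the description above: each weight changes in proportion to the inner product of the presynaptic activation vector with the globally broadcast error vector, gated by postsynaptic activity, with weights kept nonnegative.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10                 # dimensionality of each vector-valued unit
n_pre, n_post = 6, 4   # units in this (hidden) layer
lr = 0.01

W = np.abs(rng.normal(size=(n_post, n_pre)))   # nonnegative weights
pre = rng.normal(size=(n_pre, d))              # presynaptic activation vectors

post = W @ pre                                 # vector-valued postsynaptic input
active = np.linalg.norm(post, axis=1) > 1.0    # which postsynaptic units fire

g = rng.normal(size=d)                         # globally broadcast error vector

# dW[i, j] is proportional to <pre[j], g>, gated by activity of unit i.
dW = np.outer(active.astype(float), pre @ g)
W = np.maximum(W - lr * dW, 0.0)               # descend, stay nonnegative
```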
In this paper, we address the probabilistic error quantification of a general class of prediction methods. We consider a given prediction model and show how to obtain, through a sample-based approach, a probabilistic upper bound on the absolute value of the prediction error. The proposed scheme is based on a probabilistic scaling methodology in which the number of required randomized samples is independent of the complexity of the prediction model. The methodology is extended to the case in which the probabilistic uncertainty quantification is required to be valid for every member of a finite family of predictors. We illustrate the results of the paper by means of a numerical example.
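The sample-based idea can be sketched in a few lines with a generic order-statistic scheme (the paper's exact probabilistic-scaling rule and constants are not reproduced; the predictor and data model below are placeholders): evaluate the absolute error on n randomized calibration samples and return an upper order statistic as the bound. Notably, n is chosen from the desired probability levels alone, independent of the predictor's complexity.

```python
import numpy as np

rng = np.random.default_rng(2)

def predictor(x):
    return 2.0 * x                              # any fixed prediction model

n = 500                                         # randomized calibration samples
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.3, size=n)     # noisy ground truth

abs_err = np.abs(y - predictor(x))
k = 5                                           # discarded upper samples
bound = np.sort(abs_err)[-(k + 1)]              # (n - k)-th order statistic

# With probability governed only by (n, k), not by the model's
# complexity, a fresh sample's absolute error falls below this bound.
print(f"probabilistic upper bound on |error|: {bound:.3f}")
```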
While the channel capacity reflects a theoretical upper bound on the achievable information transmission rate in the limit of infinitely many bits, it does not characterise the information transfer of a given encoding routine with finitely many bits. In this note, we characterise the quality of a code (i.e., a given encoding routine) by an upper bound on the expected minimum error probability that can be achieved when using this code. We show that for equientropic channels this upper bound is minimal for codes with maximal marginal entropy. As an instructive example, we show for the additive white Gaussian noise (AWGN) channel that random coding, which is also a capacity-achieving code, indeed maximises the marginal entropy in the limit of infinitely many messages.
Fifty years have passed since David Marr, Masao Ito, and James Albus proposed seminal models of cerebellar functions. These models share the essential concept that parallel-fiber-Purkinje-cell synapses undergo plastic changes, guided by climbing-fiber activities during sensorimotor learning. However, they differ in several important respects, including holistic versus complementary roles of the cerebellum, pattern recognition versus control as computational objectives, potentiation versus depression of synaptic plasticity, teaching signals versus error signals transmitted by climbing fibers, sparse expansion coding by granule cells, and cerebellar internal models. In this review, we evaluate the different features of the three models based on recent computational and experimental studies. While acknowledging that the three models have greatly advanced our understanding of cerebellar control mechanisms in eye movements and classical conditioning, we propose a new direction for computational frameworks of the cerebellum: hierarchical reinforcement learning with multiple internal models.