
Motor cortex causally contributes to auditory word recognition following sensorimotor-enriched vocabulary training

Added by Brian Mathias
Publication date: 2020
Field: Biology
Language: English





The role of the motor cortex in perceptual and cognitive functions is highly controversial. Here, we investigated the hypothesis that the motor cortex can be instrumental in translating foreign language vocabulary. Participants were trained on foreign language (L2) words and their native language translations over four consecutive days. L2 words were accompanied by complementary gestures (sensorimotor enrichment) or pictures (sensory enrichment). Following training, participants translated the auditorily presented L2 words that they had learned, and repetitive transcranial magnetic stimulation (rTMS) was applied to the bilateral posterior motor cortices. Compared to sham stimulation, effective perturbation by rTMS slowed the translation of sensorimotor-enriched L2 words, but not of sensory-enriched L2 words. This finding suggests that sensorimotor-enriched training induced changes in L2 representations within the motor cortex, which in turn facilitated the translation of L2 words. The motor cortex may play a causal role in precipitating sensorimotor-based learning benefits, and may directly aid in remembering the native language translations of foreign language words following sensorimotor-enriched training. These findings support multisensory theories of learning while challenging reactivation-based theories.
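The key contrast above is a within-participant comparison of translation latencies under effective rTMS versus sham stimulation. A minimal sketch of such a paired analysis follows, using entirely simulated latencies; the participant count, means, and variances are illustrative assumptions, not the study's data.

```python
import numpy as np

# Simulated translation latencies (ms); all numbers are illustrative
# assumptions, not data from the study.
rng = np.random.default_rng(0)
n = 12                                       # hypothetical participant count
sham = rng.normal(900, 80, n)                # latency under sham stimulation
rtms = sham + rng.normal(60, 30, n)          # rTMS adds a per-person slowdown

# Paired t statistic on the within-participant differences.
diff = rtms - sham
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))
print(f"mean slowdown: {diff.mean():.1f} ms, paired t = {t_stat:.2f}")
```

The paired design matters here: each participant serves as their own control, so between-participant variability in baseline translation speed drops out of the test statistic.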



Related Research

How dynamic interactions between nervous system regions in mammals perform online motor control remains an unsolved problem. Here we present a new approach using a minimal model comprising spinal cord, sensory cortex, and motor cortex, coupled by long connections that are plastic. It learns from scratch to perform reaching movements in several directions with a planar six-muscle arm. The model satisfies biological plausibility constraints, such as neural implementation, transmission delays, local synaptic learning, and continuous online learning. The model can go from motor babbling to reaching arbitrary targets in less than 10 minutes. However, because the model lacks a cerebellum, the movements are ataxic. As emergent properties, neural populations in motor cortex show directional tuning and oscillatory dynamics, and the spinal cord creates convergent force fields that add linearly. The model is extensible and may eventually lead to a complete motor control simulation.
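The babbling-to-reaching progression described above can be caricatured with a linear plant standing in for the arm and an online delta rule standing in for the local synaptic learning of the plastic long connections. Every dimension, rate, and the linearity itself are simplifying assumptions, not the paper's spiking, delayed architecture.

```python
import numpy as np

# Caricature of "motor babbling to reaching": random motor commands are
# issued, their sensory outcomes observed, and an inverse model is learned
# online so that arbitrary targets can then be reached.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 6))            # plant: 6 "muscles" -> 2-D hand position
W = np.zeros((6, 2))                   # inverse model learned during babbling
eta = 0.01                             # illustrative learning rate
for _ in range(3000):                  # babbling phase
    u = rng.normal(size=6)             # random motor command
    y = A @ u                          # observed sensory outcome
    W += eta * np.outer(u - W @ y, y)  # online delta rule on (outcome, command)

target = np.array([0.5, -0.3])         # a previously unseen target
hand = A @ (W @ target)                # command the arm via the learned inverse
print(np.linalg.norm(hand - target))   # reaching error after babbling
```

Because learning is driven only by locally available command/outcome pairs, this sketch keeps the online, self-generated-data character of the model while dropping everything else.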
Thalamic relay cells fire action potentials that transmit information from retina to cortex. The amount of information that spike trains encode is usually estimated from the precision of spike timing with respect to the stimulus. Sensory input, however, is only one factor that influences neural activity. For example, intrinsic dynamics, such as oscillations of networks of neurons, also modulate firing patterns. Here, we asked whether retinal oscillations might help to convey information to neurons downstream. Specifically, we made whole-cell recordings from relay cells to reveal retinal inputs (EPSPs) and thalamic outputs (spikes) and analyzed these events with information theory. Our results show that thalamic spike trains operate as two multiplexed channels. One channel, which occupies a low frequency band (<30 Hz), is encoded by average firing rate with respect to the stimulus and carries information about local changes in the image over time. The other operates in the gamma frequency band (40-80 Hz) and is encoded by spike time relative to the retinal oscillations. Because these oscillations involve extensive areas of the retina, the second channel likely transmits information about global features of the visual scene. At times, the second channel conveyed even more information than the first.
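The information-theoretic analysis above rests on estimating mutual information between stimulus and response. A minimal plug-in estimator on toy data can illustrate the rate-coded channel; the signal model, sample count, and bin count below are assumptions, not the recorded data.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x * p_y)[nz])).sum())

# Toy stand-in for the low-frequency channel: a response that noisily tracks
# the stimulus, versus one that carries no stimulus information.
rng = np.random.default_rng(0)
stimulus = rng.normal(size=5000)
rate_coded = stimulus + 0.5 * rng.normal(size=5000)   # noisy rate code
unrelated = rng.normal(size=5000)                     # no stimulus information

print(mutual_information(stimulus, rate_coded))       # clearly positive (bits)
print(mutual_information(stimulus, unrelated))        # near zero
```

Plug-in estimates are upward-biased for small samples, which is why neural data analyses typically add bias corrections; the toy sample size here is large enough that the bias is negligible for the comparison.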
Rand Asswad (2021)
The reconstruction mechanisms built by the human auditory system during sound reconstruction are still a matter of debate. The purpose of this study is to refine the auditory cortex model introduced in [9], which is inspired by the geometrical modelling of vision. The algorithm transforms the degraded sound into an image in the time-frequency domain via a short-time Fourier transform. This image is then lifted to the Heisenberg group and reconstructed via a Wilson-Cowan integro-differential equation. Numerical experiments on a library of speech recordings are provided, showing the good reconstruction properties of the algorithm.
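The first step of the pipeline described above, turning a degraded sound into a time-frequency image via a short-time Fourier transform, can be sketched directly; the signal, window parameters, and simulated dropout below are illustrative assumptions, and the Heisenberg-group lift and Wilson-Cowan evolution from the paper are not reproduced.

```python
import numpy as np

def stft_magnitude(x, win=256, hop=128):
    """Magnitude short-time Fourier transform: sound -> time-frequency image."""
    window = np.hanning(win)
    n_frames = 1 + (len(x) - win) // hop
    frames = np.stack([x[i * hop : i * hop + win] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T   # (freq bins, time frames)

fs = 8000                               # assumed sample rate (Hz)
t = np.arange(fs) / fs
degraded = np.sin(2 * np.pi * 440 * t)  # 1 s, 440 Hz tone
degraded[2000:2400] = 0.0               # simulated dropout to be reconstructed
image = stft_magnitude(degraded)
print(image.shape)                      # (freq bins, time frames)
```

The resulting magnitude image is the object the model then lifts and evolves; the dropout appears as a vertical gap of low energy that the reconstruction stage would fill in.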
How natural communication sounds are spatially represented across the inferior colliculus, the main center of convergence for auditory information in the midbrain, is not known. The neural representation of acoustic stimuli results from the interplay of locally differing input and the organization of spectral and temporal neural preferences, which change gradually across the nucleus. This raises the question of how similar the neural representation of communication sounds is across these gradients of neural preferences, and whether it also changes gradually. Multi-unit cluster spike trains were recorded from guinea pigs presented with a spectrotemporally rich set of eleven species-specific communication sounds. Using cross-correlation, we analyzed the response similarity of spiking activity across a broad frequency range for similarly and differently frequency-tuned neurons. Furthermore, we separated the contribution of the stimulus to the correlations to investigate whether similarity is attributable to the stimulus alone, or whether interactions between the multi-unit clusters also contribute to the correlations, and whether these interactions follow the same representation as the response similarity. We found that response similarity depends on the neurons' spatial distance for both similarly and differently frequency-tuned neurons, and that it decreases gradually with spatial distance. Significant neural correlations exist and contribute to the response similarity. Our findings suggest that, for multi-unit clusters in the mammalian inferior colliculus, the gradual response similarity with spatial distance to natural complex sounds is shaped by neural interactions and the gradual organization of neural preferences.
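Separating the stimulus contribution from interaction-driven correlation, as described above, is commonly done with a shift (shuffle) predictor: correlating one cluster with the other cluster's responses from a different trial removes any correlation not locked to the stimulus. A toy sketch with simulated spike counts, where all rates and the interaction term are illustrative assumptions:

```python
import numpy as np

# Toy spike counts for two clusters driven by a shared stimulus-locked rate
# plus shared "interaction" spikes; all numbers are illustrative.
rng = np.random.default_rng(0)
n_trials, n_bins = 50, 200
stim_rate = 0.2 + 0.15 * np.sin(np.linspace(0, 8 * np.pi, n_bins))
shared = rng.poisson(0.05, size=(n_trials, n_bins))        # interaction spikes
a = rng.poisson(stim_rate, size=(n_trials, n_bins)) + shared
b = rng.poisson(stim_rate, size=(n_trials, n_bins)) + shared

def corr(x, y):
    return float(np.corrcoef(x.ravel(), y.ravel())[0, 1])

raw = corr(a, b)                         # stimulus- and interaction-driven
shift = corr(a, np.roll(b, 1, axis=0))   # shift predictor: stimulus part only
print(raw, shift, raw - shift)           # residual attributed to interaction
```

The raw correlation minus the shift predictor is the portion attributable to trial-by-trial interactions rather than to the common stimulus drive.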
In the realm of motor control, artificial agents cannot match the performance of their biological counterparts. We therefore explore a neural control architecture that is both biologically plausible and capable of fully autonomous learning. The architecture consists of feedback controllers that learn to achieve a desired state by selecting the errors that should drive them. This selection happens through a family of differential Hebbian learning rules that, through interaction with the environment, can learn to control systems where the error responds monotonically to the control signal. We then show that, in a more general case, neural reinforcement learning can be coupled with a feedback controller to reduce errors that arise non-monotonically from the control signal. The use of feedback control reduces the complexity of the reinforcement learning problem, because only a desired value must be learned, with the controller handling the details of how it is reached. This makes the function to be learned simpler, potentially allowing more complex actions to be learned. We discuss how this approach could be extended to hierarchical architectures.
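The core claim above, that a differential Hebbian rule can discover a working feedback gain when the error responds monotonically to the control signal, can be illustrated on a scalar plant. This is one simple variant of such a rule (correlating an exploratory perturbation with the resulting change in error), not the family of rules from the abstract; the plant, noise level, and learning rate are assumptions.

```python
import numpy as np

# A differential Hebbian sketch: reinforce control perturbations that reduced
# the error, so the feedback gain w acquires the correct sign and magnitude.
rng = np.random.default_rng(0)
target, x, w = 1.0, 0.0, 0.0      # setpoint, plant state, unknown gain
eta, dt = 5.0, 0.01
for _ in range(2000):
    err = target - x
    noise = 0.3 * rng.standard_normal()  # exploratory perturbation
    u = w * err + noise                  # feedback control + exploration
    x += dt * u                          # plant: error responds monotonically to u
    derr = (target - x) - err            # change in error caused by this step
    w += -eta * noise * derr             # strengthen when the perturbation
                                         # drove the error downward
print(f"learned gain {w:.2f}, final error {target - x:.3f}")
```

Because the plant is monotone, perturbations that reduce the error consistently share a sign with the useful control direction, so the correlation-based update drives the gain positive and the loop then regulates the state to the target.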