
Learning in brain-computer interface control evidenced by joint decomposition of brain and behavior

Added by: Jennifer Stiso
Publication date: 2019
Field: Biology
Language: English





Motor imagery-based brain-computer interfaces (BCIs) use an individual's ability to volitionally modulate localized brain activity as a therapy for motor dysfunction or to probe causal relations between brain activity and behavior. However, many individuals cannot learn to successfully modulate their brain activity, greatly limiting the efficacy of BCI for therapy and for basic scientific inquiry. Previous research suggests that coherent activity across diverse cognitive systems is a hallmark of individuals who can successfully learn to control the BCI. However, little is known about how these distributed networks interact through time to support learning. Here, we address this gap in knowledge by constructing and applying a multimodal network approach to decipher brain-behavior relations in motor imagery-based brain-computer interface learning using MEG. Specifically, we employ a minimally constrained matrix decomposition method (non-negative matrix factorization) to simultaneously identify regularized, covarying subgraphs of functional connectivity, to assess their similarity to task performance, and to detect their time-varying expression. Individuals displayed marked variation in the spatial properties of subgraphs, such as the connectivity between the frontal lobe and the rest of the brain, and in the temporal properties of subgraphs, such as the stage of learning at which they reached maximum expression. From these observations, we posit a conceptual model in which certain subgraphs support learning by modulating brain activity in regions important for sustaining attention. To test this model, we use tools that stipulate regional dynamics on a networked system (network control theory), and find that good learners display a single subgraph whose temporal expression tracked performance and whose architecture supports easy modulation of brain regions important for attention.
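To make the decomposition step concrete, below is a minimal sketch of joint brain-behavior non-negative matrix factorization using scikit-learn. All names, shapes, and regularization settings are illustrative assumptions, not the authors' pipeline: the data matrix stacks vectorized functional connectivity estimates (one column per time window) with a behavioral performance row, so that each learned subgraph carries both an edge-weight pattern and a behavioral loading.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_edges, n_windows, n_subgraphs = 300, 60, 5   # hypothetical dimensions

A = rng.random((n_edges, n_windows))           # edge-by-time connectivity (stand-in data)
performance = rng.random((1, n_windows))       # behavioral trace (stand-in data)
A = np.vstack([A, performance])                # joint brain-behavior matrix

# Sparse, regularized factorization A ~= W @ H: each column of W is a
# subgraph (edge weights plus a behavioral loading in the final row), and
# each row of H is that subgraph's time-varying expression.
model = NMF(n_components=n_subgraphs, init="nndsvda",
            alpha_W=0.1, l1_ratio=0.5, max_iter=1000, random_state=0)
W = model.fit_transform(A)
H = model.components_

# A subgraph with a large behavioral loading is one whose temporal
# expression tracks task performance.
print("subgraph-performance loadings:", np.round(W[-1, :], 3))
```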



Related Research

We study the extent to which vibrotactile stimuli delivered to the head of a subject can serve as a platform for a brain-computer interface (BCI) paradigm. Six head positions are used to evoke combined somatosensory and auditory (via the bone conduction effect) brain responses, in order to define a multimodal tactile and auditory brain-computer interface (taBCI). Experimental results of subjects performing online taBCI, using stimuli with a moderately fast inter-stimulus interval (ISI), validate the taBCI paradigm, while the feasibility of the concept is illuminated through information transfer rate case studies.
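The information transfer rate referenced here is commonly computed with the standard Wolpaw formula; the sketch below applies it to a six-class selection problem matching the six head positions, with illustrative accuracy and selection-rate values rather than the paper's results.

```python
from math import log2

def itr_bits_per_selection(n_classes: int, accuracy: float) -> float:
    """Wolpaw information transfer rate in bits per selection."""
    p = accuracy
    if p <= 1 / n_classes:
        return 0.0                       # at or below chance: no information
    if p == 1.0:
        return log2(n_classes)
    return (log2(n_classes) + p * log2(p)
            + (1 - p) * log2((1 - p) / (n_classes - 1)))

selections_per_min = 10                  # set by the ISI; value assumed here
bits = itr_bits_per_selection(6, 0.80)   # 6 head positions, 80% accuracy (assumed)
print(f"{bits:.2f} bits/selection, {bits * selections_per_min:.1f} bits/min")
```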
Conventional neuroimaging analyses have revealed the computational specificity of localized brain regions, exploiting the power of the subtraction technique in fMRI and event-related potential analyses in EEG. Moving beyond this convention, many researchers have begun exploring network-based neurodynamics and coordination between brain regions as a function of behavioral parameters or environmental statistics; however, most approaches average evoked activity across the experimental session to study task-dependent networks. Here, we examined ongoing oscillatory activity and used a methodology to estimate directionality in brain-behavior interactions. After source reconstruction, activity within specific frequency bands in a priori regions of interest was linked to continuous behavioral measurements, and we used a predictive filtering scheme to estimate the asymmetry between brain-to-behavior and behavior-to-brain prediction. We applied this approach to a simulated driving task and examined directed relationships between brain activity and continuous driving behavior (steering or heading error). Our results indicated that two neuro-behavioral states emerge in this naturalistic environment: a Proactive brain state that actively plans the response to the sensory information, and a Reactive brain state that processes incoming information and reacts to environmental statistics.
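As a rough illustration of the prediction-asymmetry idea, here is a Granger-style sketch in which lagged linear models predict each signal from the other's past; the paper's actual predictive filtering scheme may differ, and the signals below are synthetic stand-ins.

```python
import numpy as np

def lagged_r2(source, target, order=5):
    """R^2 of predicting target[t] from the previous `order` samples of source."""
    X = np.column_stack([source[i:len(source) - order + i] for i in range(order)])
    y = target[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ coef).var() / y.var()

rng = np.random.default_rng(1)
brain = rng.standard_normal(1000)                                   # stand-in band power
behavior = np.convolve(brain, np.ones(5) / 5, "full")[:brain.size]  # causally lags brain
behavior += 0.5 * rng.standard_normal(brain.size)                   # measurement noise

# Positive asymmetry: past brain activity predicts behavior better than
# past behavior predicts brain activity.
asymmetry = lagged_r2(brain, behavior) - lagged_r2(behavior, brain)
print(f"brain-to-behavior asymmetry: {asymmetry:.3f}")
```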
We present several deep learning models for assessing the morphometric fidelity of deep grey matter region models extracted from brain MRI. We test three different convolutional neural network architectures (VGGNet, ResNet and Inception) over 2D maps of geometric features. Further, we present a novel geometry feature augmentation technique based on a parametric spherical mapping. Finally, we present an approach for model decision visualization, allowing human raters to see the areas of subcortical shapes most likely to be deemed of failing quality by the machine. Our training data comprise 5200 subjects from the ENIGMA Schizophrenia MRI cohorts, and our test dataset contains 1500 subjects from the ENIGMA Major Depressive Disorder cohorts. Our final models reduce human rater time by 46-70%. ResNet outperforms VGGNet and Inception for all of our predictive tasks.
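Below is a minimal sketch of the classifier setup described above, assuming 2D geometric feature maps as input and a binary pass/fail quality label; the channel count, image size, and labels are placeholders, not the ENIGMA pipeline's actual configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

n_feature_channels = 3                 # e.g. thickness, curvature, Jacobian (assumed)
model = resnet18(weights=None)

# Adapt the stem to the feature-map channel count and the head to pass/fail.
model.conv1 = nn.Conv2d(n_feature_channels, 64, kernel_size=7,
                        stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)

maps = torch.randn(8, n_feature_channels, 128, 128)   # one batch of shape maps
logits = model(maps)                                  # pass/fail scores
print(logits.shape)                                   # torch.Size([8, 2])
```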
This paper proposes a novel topological learning framework that can integrate brain networks of different sizes and topology through persistent homology. This is possible through the introduction of a new topological loss function that enables this challenging task. The use of the proposed loss function bypasses the intrinsic computational bottleneck associated with matching networks. We validate the method in extensive statistical simulations with ground truth to assess the effectiveness of the topological loss in discriminating networks with different topology. The method is further applied to a twin brain imaging study to determine whether the brain network is genetically heritable; the challenge there lies in overlaying the topologically different functional brain networks obtained from resting-state functional MRI (fMRI) onto the template structural brain network obtained through diffusion MRI (dMRI).
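As a loose sketch of what a topological loss between weighted networks can look like, the code below compares 0-dimensional persistence (component-merge values, obtained here from the maximum spanning tree of each network) between two equal-sized random graphs. This is an assumption-laden simplification: the paper's framework also handles networks of different sizes and avoids explicit matching, which this toy version does not attempt.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def zero_dim_deaths(adj):
    """Sorted edge weights at which connected components merge, taken from
    the maximum spanning tree (computed as an MST on negated weights)."""
    mst = minimum_spanning_tree(-adj).toarray()
    return np.sort(-mst[mst != 0])

def topo_loss(adj_a, adj_b):
    """Squared L2 distance between sorted 0D merge values (equal-sized nets)."""
    return float(np.sum((zero_dim_deaths(adj_a) - zero_dim_deaths(adj_b)) ** 2))

rng = np.random.default_rng(2)
A = rng.random((20, 20)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
B = rng.random((20, 20)); B = (B + B.T) / 2; np.fill_diagonal(B, 0)
print("toy topological loss:", round(topo_loss(A, B), 4))
```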
Erwan Vaineau, 2019
We describe the experimental procedures for a dataset that we have made publicly available at https://doi.org/10.5281/zenodo.1494163 in mat and csv formats. This dataset contains electroencephalographic (EEG) recordings of 24 subjects performing a visual P300 brain-computer interface experiment on a PC. The visual P300 is an event-related potential elicited by visual stimulation, peaking 240-600 ms after stimulus onset. The experiment was designed to compare the use of a P300-based brain-computer interface on a PC with and without adaptive calibration using Riemannian geometry. EEG data were recorded with 16 electrodes during an experiment that took place at the GIPSA-lab, Grenoble, France, in 2013 (Congedo, 2013). Python code for manipulating the data is available at https://github.com/plcrodrigues/py.BI.EEG.2013-GIPSA. The ID of this dataset is BI.EEG.2013-GIPSA.
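For readers who download the archive, a minimal loading sketch is below; the file name is a placeholder and the variable names inside the .mat files are not specified here, so the maintained loader in the py.BI.EEG.2013-GIPSA repository linked above should be preferred.

```python
from scipy.io import loadmat

# Placeholder filename; substitute a file from the Zenodo archive.
data = loadmat("subject_01.mat")
print(sorted(k for k in data if not k.startswith("__")))  # inspect stored variables
```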
