
PRAGMA: Interactively Constructing Functional Brain Parcellations

Added by: Roza G. Bayrak
Publication date: 2020
Language: English





A prominent goal of neuroimaging studies is mapping the human brain, in order to identify and delineate functionally meaningful regions and elucidate their roles in cognitive behaviors. These brain regions are typically represented by atlases that capture general trends over large populations. Despite being indispensable to neuroimaging experts, population-level atlases do not capture individual differences in functional organization. In this work, we present an interactive visualization method, PRAGMA, that allows domain experts to derive scan-specific parcellations from established atlases. PRAGMA features a user-driven, hierarchical clustering scheme for defining temporally correlated parcels at varying granularity. The visualization design supports the user in making decisions on how to perform clustering, namely when to expand, collapse, or merge parcels. This is accomplished through a set of linked and coordinated views for understanding the user's current hierarchy, assessing intra-cluster variation, and relating parcellations to an established atlas. We assess the effectiveness of PRAGMA through a user study with four neuroimaging domain experts; the results show that PRAGMA has the potential to enable exploration of individualized and state-specific brain parcellations and to offer interesting insights into functional brain networks.
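The parcel definition described above rests on grouping voxels whose time series are temporally correlated. A minimal sketch of that kind of correlation-based hierarchical clustering is shown below; the array shape, the average-linkage choice, and the cut threshold are illustrative assumptions, not the authors' implementation, which is driven interactively through the PRAGMA interface.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Illustrative input: rows are voxel (or vertex) time series from one scan.
    rng = np.random.default_rng(0)
    timeseries = rng.standard_normal((200, 300))   # 200 voxels x 300 time points

    # Distance = 1 - Pearson correlation, so temporally similar voxels end up close.
    corr = np.corrcoef(timeseries)
    dist = 1.0 - corr
    condensed = dist[np.triu_indices_from(dist, k=1)]

    # Average-linkage hierarchy; cutting the tree at different heights yields coarser
    # or finer parcellations, analogous to expanding or collapsing parcels interactively.
    tree = linkage(condensed, method="average")
    labels = fcluster(tree, t=0.8, criterion="distance")
    print("number of parcels:", labels.max())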




Read More

395 - Erwan Vaineau 2019
We describe the experimental procedures for a dataset that we have made publicly available at https://doi.org/10.5281/zenodo.1494163 in mat and csv formats. This dataset contains electroencephalographic (EEG) recordings of 24 subjects performing a visual P300 Brain-Computer Interface experiment on a PC. The visual P300 is an event-related potential elicited by visual stimulation, peaking 240-600 ms after stimulus onset. The experiment was designed to compare the use of a P300-based brain-computer interface on a PC with and without adaptive calibration using Riemannian geometry. The brain-computer interface is based on electroencephalography (EEG). EEG data were recorded using 16 electrodes during an experiment that took place in the GIPSA-lab, Grenoble, France, in 2013 (Congedo, 2013). Python code for manipulating the data is available at https://github.com/plcrodrigues/py.BI.EEG.2013-GIPSA. The ID of this dataset is BI.EEG.2013-GIPSA.
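As a starting point for the .mat export mentioned above, a minimal loading sketch with SciPy is shown below; the file name is hypothetical and the stored variable names are not assumed, so the linked py.BI.EEG.2013-GIPSA repository remains the authoritative loader.

    from scipy.io import loadmat

    # Load one subject's recording from the Zenodo .mat export (file name is hypothetical).
    mat = loadmat("subject_01.mat")

    # Inspect the variables stored in the file; the actual key names are defined
    # by the dataset, so check them before indexing.
    print([k for k in mat if not k.startswith("__")])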
The human brain provides a range of functions such as expressing emotions and controlling the rate of breathing, and its study has attracted the interest of scientists for many years. As machine learning models become more sophisticated, and biometric data becomes more readily available through new non-invasive technologies, it becomes increasingly possible to gain access to interesting biometric data that could revolutionize Human-Computer Interaction. In this research, we propose a method to assess and quantify human attention levels and their effects on learning. In our study, we employ a brain-computer interface (BCI) capable of detecting brain wave activity and displaying the corresponding electroencephalograms (EEG). We train recurrent neural networks (RNNs) to identify the type of activity an individual is performing.
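A minimal sketch of the kind of recurrent classifier described above, written in PyTorch; the channel count, window length, and number of activity classes are placeholders rather than the study's actual configuration.

    import torch
    import torch.nn as nn

    class EEGActivityRNN(nn.Module):
        """LSTM that maps a window of multi-channel EEG to an activity label."""
        def __init__(self, n_channels=16, hidden=64, n_classes=4):
            super().__init__()
            self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):                 # x: (batch, time, channels)
            _, (h, _) = self.lstm(x)          # h: (1, batch, hidden)
            return self.head(h[-1])           # logits over activity classes

    model = EEGActivityRNN()
    window = torch.randn(8, 250, 16)          # 8 windows, 250 samples, 16 channels
    logits = model(window)                    # shape (8, 4)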
Attention deficit, anxiety, and sleep disorders are some of the problems that affect many people. As these issues can evolve into severe conditions, more factors should be taken into consideration. The paper proposes a concept that aims to help students enhance their brain performance. An electroencephalogram (EEG) headset is used to record the brainwaves, along with a web application that manages the input data coming from the headset and from the user. Factors like current activity, mood, focus, stress, relaxation, engagement, excitement, and interest are provided in numerical format through the use of the headset. The users provide information about their activities, such as relaxing, listening to music, watching a movie, and studying. Based on the analysis, it was found that the users consider the application easy to use. As the users become more emotionally balanced, their results improve, which also makes them more self-confident. For students, neurofeedback can be studied to improve sporting and artistic performance, including in cases of attention deficit hyperactivity disorder. Aptitude for a subject can be determined based on the relevant brainwaves. The learning environment is an important factor during the analysis of the results. Teachers, professors, students, and parents can collaborate and, based on the gathered data, new teaching methods can be adopted in the classroom and at home. The proposed solution can guide students while studying, as well as anyone who wishes to be more productive while working on their tasks.
We describe the experimental procedures for a dataset that we have made publicly available at https://doi.org/10.5281/zenodo.2649006 in mat and csv formats. This dataset contains electroencephalographic (EEG) recordings of 25 subjects testing the Brain Invaders (Congedo, 2011), a visual P300 Brain-Computer Interface inspired by the famous vintage video game Space Invaders (Taito, Tokyo, Japan). The visual P300 is an event-related potential elicited by visual stimulation, peaking 240-600 ms after stimulus onset. EEG data were recorded using 16 electrodes in an experiment that took place in the GIPSA-lab, Grenoble, France, in 2012 (Van Veen, 2013 and Congedo, 2013). Python code for manipulating the data is available at https://github.com/plcrodrigues/py.BI.EEG.2012-GIPSA. The ID of this dataset is BI.EEG.2012-GIPSA.
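Because the P300 peaks roughly 240-600 ms after stimulus onset, a typical preprocessing step for either of these datasets is epoching the continuous recording around each stimulus. A minimal sketch follows; the sampling rate, window bounds, and synthetic event list are assumptions for illustration.

    import numpy as np

    def epoch_p300(eeg, onsets, fs=512, tmin=0.0, tmax=0.6):
        """Cut fixed-length windows after each stimulus onset.

        eeg    : (n_channels, n_samples) continuous recording
        onsets : sample indices of stimulus onsets
        """
        length = int((tmax - tmin) * fs)
        starts = np.asarray(onsets) + int(tmin * fs)
        epochs = [eeg[:, s:s + length] for s in starts if s + length <= eeg.shape[1]]
        return np.stack(epochs)               # (n_epochs, n_channels, length)

    # Illustrative call with synthetic data: 16 channels, 60 s at 512 Hz.
    eeg = np.random.randn(16, 512 * 60)
    onsets = np.arange(512, 512 * 55, 512)    # one stimulus per second
    print(epoch_p300(eeg, onsets).shape)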
141 - Zhe Sun, Zihao Huang, Feng Duan 2020
Brain-computer interface (BCI) technologies have been widely used in many areas. In particular, non-invasive technologies such as electroencephalography (EEG) or near-infrared spectroscopy (NIRS) have been used to detect motor imagery, disease, or mental state. It has already been shown in the literature that a hybrid of EEG and NIRS yields better results than either signal alone. The fusion algorithm for EEG and NIRS sources is the key to implementing them in real-life applications. In this research, we propose three fusion methods for the hybrid EEG- and NIRS-based brain-computer interface system: linear fusion, tensor fusion, and $p$th-order polynomial fusion. Firstly, our results confirm that the hybrid BCI system is more accurate, as expected. Secondly, the $p$th-order polynomial fusion has the best classification results of the three methods and also shows improvements over previous studies. For a motor imagery task and a mental arithmetic task, the best detection accuracies in previous papers were 74.20% and 88.1%, whereas our methods achieved 77.53% and 90.19%, respectively. Furthermore, our proposed methods are less computationally demanding than complex artificial neural network methods.
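A minimal sketch of the three fusion ideas on already-extracted feature vectors: linear fusion as concatenation, tensor fusion as a per-trial outer product, and polynomial fusion as a degree-p expansion of the joint features. The feature dimensions and the degree are illustrative assumptions, not the authors' exact formulation.

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures

    eeg_feat = np.random.randn(100, 8)        # 100 trials x 8 EEG features
    nirs_feat = np.random.randn(100, 6)       # 100 trials x 6 NIRS features

    # Linear fusion: concatenate the two feature vectors per trial.
    linear = np.concatenate([eeg_feat, nirs_feat], axis=1)                  # (100, 14)

    # Tensor fusion: per-trial outer product of EEG and NIRS features, flattened.
    tensor = np.einsum("ni,nj->nij", eeg_feat, nirs_feat).reshape(100, -1)  # (100, 48)

    # p-th order polynomial fusion: degree-p expansion over the fused features (p = 2 here).
    poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(linear)
    print(linear.shape, tensor.shape, poly.shape)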