We describe the experimental procedures for a dataset that we have made publicly available at https://doi.org/10.5281/zenodo.1494163 in mat and csv formats. This dataset contains electroencephalographic (EEG) recordings of 24 subjects performing a visual P300 brain-computer interface experiment on a PC. The visual P300 is an event-related potential elicited by visual stimulation, peaking 240-600 ms after stimulus onset. The experiment was designed to compare the use of a P300-based brain-computer interface on a PC with and without adaptive calibration using Riemannian geometry. EEG data were recorded with 16 electrodes during an experiment that took place at the GIPSA-lab, Grenoble, France, in 2013 (Congedo, 2013). Python code for manipulating the data is available at https://github.com/plcrodrigues/py.BI.EEG.2013-GIPSA. The ID of this dataset is BI.EEG.2013-GIPSA.
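Since the P300 peaks 240-600 ms after stimulus onset, a typical first step when working with such recordings is to slice the continuous EEG into fixed-length epochs around each stimulus onset. The sketch below is illustrative only and is not taken from the dataset's companion code; the 512 Hz sampling rate and the synthetic data are assumptions for the example.

```python
import numpy as np

def extract_epochs(eeg, onsets, fs, tmin=0.0, tmax=0.6):
    """Slice fixed-length epochs from continuous EEG.

    eeg    : (n_channels, n_samples) array of continuous EEG
    onsets : sample indices of stimulus onsets
    fs     : sampling rate in Hz
    tmin, tmax : epoch window relative to onset, in seconds
    """
    start = int(tmin * fs)
    stop = int(tmax * fs)
    epochs = [eeg[:, o + start:o + stop] for o in onsets
              if o + stop <= eeg.shape[1]]  # drop epochs running past the end
    return np.stack(epochs)  # (n_epochs, n_channels, n_times)

# synthetic example: 16 channels, 10 s at an assumed 512 Hz, two onsets
fs = 512
eeg = np.random.randn(16, 10 * fs)
epochs = extract_epochs(eeg, onsets=[1000, 3000], fs=fs)
print(epochs.shape)  # (2, 16, 307)
```

Averaging such epochs separately for target and non-target stimuli is the usual way to visualize the P300 difference wave.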
Objective: Previous works using a visual P300-based speller have reported improved performance when the shape or colour of the presented stimulus is modified. However, the combined effects of these two factors have not yet been studied. Thus, the aim of the present work was to study both factors and assess the interaction between them. Method: Fifteen naive participants tested four different spellers in calibration and online tasks. All spellers were identical except for the illumination of the target stimulus: white letters, white blocks, coloured letters, and coloured blocks. Results: The block-shaped conditions offered an improvement over the letter-shaped conditions in the calibration (accuracy) and online (accuracy and correct commands per minute) tasks. Analysis of the P300 waveform showed a larger difference between target and non-target stimulus waveforms for the block-shaped conditions than for the letter-shaped ones. The hypothesized effect of the colour heterogeneity of the stimuli was not supported at any level of the analysis. Conclusion: Block-shaped illumination outperformed the standard letter-shaped flashing stimuli in classification performance, correct commands per minute, and P300 waveform differences.
Brain-computer interface (BCI) technologies have been widely used in many areas. In particular, non-invasive technologies such as electroencephalography (EEG) and near-infrared spectroscopy (NIRS) have been used to detect motor imagery, disease, and mental state. It has already been shown in the literature that hybrid EEG-NIRS systems outperform either modality alone. The fusion algorithm for EEG and NIRS sources is key to implementing them in real-life applications. In this research, we propose three fusion methods for hybrid EEG-NIRS brain-computer interface systems: linear fusion, tensor fusion, and $p$th-order polynomial fusion. Firstly, our results confirm that the hybrid BCI system is more accurate, as expected. Secondly, the $p$th-order polynomial fusion achieves the best classification results of the three methods, and also shows improvements over previous studies. For a motor imagery task and a mental arithmetic task, the best detection accuracies reported in previous papers were 74.20% and 88.1%, whereas our methods achieved 77.53% and 90.19%, respectively. Furthermore, unlike complex artificial neural network methods, our proposed methods are not as computationally demanding.
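One plausible reading of "$p$th-order polynomial fusion" is that the EEG and NIRS feature vectors are combined through all cross-products of their element-wise powers up to total order $p$. The sketch below illustrates that construction; the function name, the feature values, and the exact form of the expansion are assumptions, and the paper's actual method may differ.

```python
import numpy as np

def polynomial_fusion(eeg_feat, nirs_feat, p=2):
    """Fuse two feature vectors by collecting all cross-products of
    element-wise powers with total order <= p (illustrative sketch).
    Returns a single flattened fused feature vector."""
    fused = []
    for i in range(p + 1):
        for j in range(p + 1 - i):
            # outer product of eeg_feat**i and nirs_feat**j, flattened
            fused.append(np.outer(eeg_feat ** i, nirs_feat ** j).ravel())
    return np.concatenate(fused)

eeg_feat = np.array([0.5, -1.2, 0.3])   # hypothetical EEG features
nirs_feat = np.array([1.1, 0.4])        # hypothetical NIRS features
z = polynomial_fusion(eeg_feat, nirs_feat, p=2)
print(z.shape)  # (36,): 6 (i, j) pairs with i + j <= 2, each 3 x 2 flattened
```

Setting $p=1$ and dropping the cross-terms would recover a plain concatenation, which is one way to view linear fusion as a special case.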
In this exploratory study, we examine the possibilities of non-invasive Brain-Computer Interface (BCI) in the context of Smart Home Technology (SHT) targeted at older adults. During two workshops, one stationary and one online via Zoom, we gathered insights from end users concerning the potential of BCI in the SHT setting. We explored its advantages and drawbacks, the features older adults see as vital, and the ones they would benefit from. Apart from evaluating the participants' perception of such devices during the two workshops, we also analyzed some key considerations resulting from the insights gathered, such as potential barriers, ways to mitigate them, and the strengths and opportunities connected to BCI. These may be useful for designing BCI interaction paradigms and pinpointing areas of interest to pursue in further studies.
Brain-computer interfaces (BCIs) can provide an alternative means of communication for individuals with severe neuromuscular limitations. The P300-based BCI speller relies on eliciting and detecting transient event-related potentials (ERPs) in electroencephalography (EEG) data, in response to a user attending to rarely occurring target stimuli amongst a series of non-target stimuli. However, in most P300 speller implementations, the stimuli to be presented are randomly selected from a limited set of options, and stimulus selection and presentation are not optimized based on previous user data. In this work, we propose a data-driven method for stimulus selection based on the expected discrimination gain metric. The data-driven approach selects stimuli based on previously observed stimulus responses, with the aim of choosing a set of stimuli that will provide the most information about the user's intended target character. Our approach incorporates knowledge of physiological and system constraints imposed by real-time BCI implementation. Simulations were performed to compare our stimulus selection approach to the row-column paradigm, the conventional stimulus selection method for P300 spellers. Results from the simulations demonstrated that our adaptive stimulus selection approach has the potential to significantly improve performance over the conventional method: up to 34% improvement in accuracy and 43% reduction in the mean number of stimulus presentations required to spell a character in a 72-character grid. In addition, our greedy approach to stimulus selection provides the flexibility to accommodate design constraints.
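The intuition behind greedy, data-driven flash selection can be sketched with a much simpler proxy: a binary (target/non-target) response is most informative when the flashed group carries roughly half of the current posterior probability mass over characters. The code below greedily builds such a group; it is only an illustration of that intuition, not the paper's expected-discrimination-gain metric or its constraint handling, and all names and values are assumptions.

```python
import numpy as np

def greedy_flash_group(posterior, group_size):
    """Greedily pick `group_size` characters whose summed posterior
    probability lands as close to 0.5 as possible -- a crude proxy for
    maximizing the expected information of one target/non-target flash."""
    remaining = list(range(len(posterior)))
    group, mass = [], 0.0
    for _ in range(group_size):
        # add the character that moves the group's total mass closest to 0.5
        best = min(remaining, key=lambda c: abs(mass + posterior[c] - 0.5))
        group.append(best)
        mass += posterior[best]
        remaining.remove(best)
    return group, mass

# hypothetical posterior over a tiny 6-character "grid"
posterior = np.array([0.4, 0.3, 0.1, 0.1, 0.05, 0.05])
group, mass = greedy_flash_group(posterior, group_size=2)
print(group, mass)  # [0, 2] 0.5
```

In contrast, the row-column paradigm always flashes fixed rows and columns regardless of the posterior, which is why an adaptive scheme can need fewer presentations per character.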
We describe the experimental procedures for a dataset that we have made publicly available at https://doi.org/10.5281/zenodo.2649006 in mat and csv formats. This dataset contains electroencephalographic (EEG) recordings of 25 subjects testing the Brain Invaders (Congedo, 2011), a visual P300 brain-computer interface inspired by the famous vintage video game Space Invaders (Taito, Tokyo, Japan). The visual P300 is an event-related potential elicited by visual stimulation, peaking 240-600 ms after stimulus onset. EEG data were recorded with 16 electrodes in an experiment that took place at the GIPSA-lab, Grenoble, France, in 2012 (Van Veen, 2013 and Congedo, 2013). Python code for manipulating the data is available at https://github.com/plcrodrigues/py.BI.EEG.2012-GIPSA. The ID of this dataset is BI.EEG.2012-GIPSA.