During mechanical ventilation, patient-ventilator disharmony is frequently observed and may result in increased breathing effort, compromising the patient's comfort and recovery. This circumstance requires clinical intervention and becomes challenging when verbal communication is difficult. In this work, we propose a brain-computer interface (BCI) to automatically and non-invasively detect patient-ventilator disharmony from electroencephalographic (EEG) signals: a brain-ventilator interface (BVI). Our framework exploits the cortical activation provoked by the inspiratory compensation when the subject and the ventilator are desynchronized. The use of a one-class approach and the Riemannian geometry of EEG covariance matrices allows effective classification of respiratory states. The BVI is validated on nine healthy subjects who performed different respiratory tasks mimicking patient-ventilator disharmony. Classification performance, in terms of area under the ROC curve, is significantly improved using EEG signals compared to detection based on airflow. Reducing the number of electrodes needed to achieve discrimination is often desirable (e.g., for portable BCI systems). Using an iterative channel selection technique, the Common Highest Order Ranking (CHOrRa), we find that a reduced set of electrodes (n=6) can slightly improve performance in an intra-subject configuration and still provides fairly good performance in a general inter-subject setting. Results support the discriminant capacity of our approach to identify anomalous respiratory states by learning from a training set containing only normal respiratory epochs. The proposed framework opens the door to brain-ventilator interfaces for monitoring patients' breathing comfort and adapting ventilator parameters to patients' respiratory needs.
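A minimal sketch of such a one-class detector, assuming epochs are given as channels-by-samples arrays and using a log-Euclidean mean as a cheap stand-in for the Riemannian mean; this illustrates the general approach, not the authors' exact BVI pipeline:

import numpy as np
from scipy.linalg import eigvalsh, expm, logm

def covariance(epoch):
    """Sample covariance of one EEG epoch (channels x samples)."""
    epoch = epoch - epoch.mean(axis=1, keepdims=True)
    return epoch @ epoch.T / (epoch.shape[1] - 1)

def riemann_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B."""
    return np.sqrt(np.sum(np.log(eigvalsh(A, B)) ** 2))

def log_euclidean_mean(covs):
    """Log-Euclidean mean, a simple approximation of the Riemannian mean."""
    return np.real(expm(np.mean([logm(C) for C in covs], axis=0)))

def fit_reference(normal_epochs, quantile=0.95):
    """Learn a reference covariance and distance threshold from normal epochs only."""
    covs = [covariance(e) for e in normal_epochs]
    ref = log_euclidean_mean(covs)
    dists = [riemann_distance(C, ref) for C in covs]
    return ref, np.quantile(dists, quantile)

def is_disharmony(epoch, ref, threshold):
    """Flag an epoch whose covariance lies too far from the normal reference."""
    return riemann_distance(covariance(epoch), ref) > threshold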
We describe the experimental procedures for a dataset that we have made publicly available at https://doi.org/10.5281/zenodo.2649006 in mat and csv formats. This dataset contains electroencephalographic (EEG) recordings of 25 subjects testing the Brain Invaders (Congedo, 2011), a visual P300 Brain-Computer Interface inspired by the famous vintage video game Space Invaders (Taito, Tokyo, Japan). The visual P300 is an event-related potential elicited by visual stimulation, peaking 240-600 ms after stimulus onset. EEG data were recorded from 16 electrodes in an experiment that took place at the GIPSA-lab, Grenoble, France, in 2012 (Van Veen, 2013 and Congedo, 2013). Python code for manipulating the data is available at https://github.com/plcrodrigues/py.BI.EEG.2012-GIPSA. The ID of this dataset is BI.EEG.2012-GIPSA.
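A minimal loading sketch for the published formats; the file names below are placeholders, since the actual file layout is defined by the Zenodo archive and the py.BI.EEG.2012-GIPSA package:

import pandas as pd
from scipy.io import loadmat

# Hypothetical file names, used only to illustrate reading the mat and csv versions.
mat = loadmat("subject_01.mat")
print(sorted(k for k in mat if not k.startswith("__")))  # inspect the stored variables

df = pd.read_csv("subject_01.csv")  # assumed layout: 16 EEG channels plus event markers
print(df.head())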
Riemannian geometry has been applied to brain-computer interfaces (BCI) for brain signal classification, yielding promising results. Studying electroencephalographic (EEG) signals through their associated covariance matrices mitigates common sources of variability (electronic, electrical, biological) by constructing a representation that is invariant to these perturbations. Since working with covariance matrices in Euclidean space is known to be error-prone, one can take advantage of algorithmic advances in information geometry and matrix manifolds to implement methods dedicated to Symmetric Positive-Definite (SPD) matrices. This paper provides a comprehensive review of the current tools of information geometry and how they can be applied to EEG covariance matrices. In practice, covariance matrices must be estimated, so a thorough study of estimators is conducted on a real EEG dataset. As its main contribution, this paper proposes an online implementation of a classifier in the Riemannian space and its assessment in Steady-State Visually Evoked Potential (SSVEP) experiments.
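A sketch of what such an online Riemannian classifier could look like, using a minimum-distance-to-mean rule whose class means are refined along the geodesic as new trials arrive; this is a generic illustration under those assumptions, not the paper's exact algorithm:

import numpy as np
from scipy.linalg import eigvalsh, fractional_matrix_power, sqrtm

def riemann_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices."""
    return np.sqrt(np.sum(np.log(eigvalsh(A, B)) ** 2))

def geodesic(A, B, t):
    """Point at parameter t on the affine-invariant geodesic from A to B."""
    A_half = np.real(sqrtm(A))
    A_inv_half = np.linalg.inv(A_half)
    inner = np.real(fractional_matrix_power(A_inv_half @ B @ A_inv_half, t))
    return A_half @ inner @ A_half

class OnlineMDM:
    """Minimum distance to mean with incrementally updated class means."""

    def __init__(self):
        self.means = {}   # class label -> running Riemannian mean
        self.counts = {}

    def partial_fit(self, cov, label):
        if label not in self.means:
            self.means[label], self.counts[label] = cov, 1
        else:
            n = self.counts[label]
            # step 1/(n+1) along the geodesic toward the new trial's covariance
            self.means[label] = geodesic(self.means[label], cov, 1.0 / (n + 1))
            self.counts[label] = n + 1

    def predict(self, cov):
        return min(self.means, key=lambda c: riemann_distance(cov, self.means[c]))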
Electroencephalography (EEG) classification has been widely applied to the analysis of cerebral diseases in recent years. Unfortunately, invalid/noisy EEGs degrade diagnostic performance, and most previously developed methods ignore the need for EEG selection before classification. To this end, this paper proposes a novel maximum weight clique-based EEG selection approach, named mwcEEGs, which maps EEG selection to the search for maximum similarity-weighted cliques in an improved Fréchet-distance-weighted undirected EEG graph, simultaneously considering edge weights and vertex weights. Our mwcEEGs improves classification performance by selecting intra-clique pairwise-similar and inter-clique discriminative EEGs with a similarity threshold $\delta$. Experimental results demonstrate the effectiveness of the algorithm compared with state-of-the-art time series selection algorithms on real-world EEG datasets.
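A toy illustration of the underlying idea (not the mwcEEGs algorithm itself): connect epochs whose discrete Fréchet distance yields a similarity above an assumed threshold delta, then score maximal cliques by their summed edge similarity. Signals are assumed to be 1-D sequences, e.g., single-channel epochs:

import networkx as nx
import numpy as np

def discrete_frechet(p, q):
    """Discrete Frechet distance between two 1-D sequences (dynamic programming)."""
    n, m = len(p), len(q)
    ca = np.zeros((n, m))
    ca[0, 0] = abs(p[0] - q[0])
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], abs(p[i] - q[0]))
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], abs(p[0] - q[j]))
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           abs(p[i] - q[j]))
    return ca[-1, -1]

def select_clique(signals, delta=0.5):
    """Return the maximal clique of mutually similar signals with the largest summed similarity."""
    g = nx.Graph()
    g.add_nodes_from(range(len(signals)))
    sim = {}
    for i in range(len(signals)):
        for j in range(i + 1, len(signals)):
            s = 1.0 / (1.0 + discrete_frechet(signals[i], signals[j]))
            if s >= delta:           # similarity threshold
                g.add_edge(i, j)
                sim[(i, j)] = s
    score = lambda c: sum(sim[(a, b)] for a in c for b in c if (a, b) in sim)
    return max(nx.find_cliques(g), key=score)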
In this work we study the use of moderate deviation functions to measure similarity and dissimilarity among a given set of interval-valued data. To do so, we introduce the notion of interval-valued moderate deviation function and study, in particular, those interval-valued moderate deviation functions that preserve the width of the input intervals. We then study how to apply these functions to construct interval-valued aggregation functions. We apply them in the decision-making phase of two motor-imagery brain-computer interface frameworks, obtaining better results than those obtained with other numerical and interval-valued aggregations.
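A toy sketch of how a real-valued moderate deviation function D induces an aggregation: the aggregate y of x_1, ..., x_n is the value at which the sum of D(x_i, y) vanishes, with D increasing in y and decreasing in x; applying the construction separately to interval lower and upper bounds gives a simple interval-valued aggregation. The specific D and the bisection tolerance below are illustrative choices, not the functions proposed in the paper:

from typing import Callable, List, Tuple

Interval = Tuple[float, float]

def aggregate(xs: List[float], D: Callable[[float, float], float]) -> float:
    """Solve sum_i D(x_i, y) = 0 for y by bisection on [min(xs), max(xs)]."""
    lo, hi = min(xs), max(xs)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if sum(D(x, mid) for x in xs) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def aggregate_intervals(intervals: List[Interval],
                        D: Callable[[float, float], float]) -> Interval:
    """Aggregate lower and upper bounds separately with the same deviation function."""
    return (aggregate([a for a, _ in intervals], D),
            aggregate([b for _, b in intervals], D))

# D(x, y) = y - x recovers the arithmetic mean of the endpoints:
print(aggregate_intervals([(0.2, 0.4), (0.1, 0.5), (0.3, 0.6)], lambda x, y: y - x))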
Brain-computer interface (BCI) technologies have been widely used in many areas. In particular, non-invasive technologies such as electroencephalography (EEG) or near-infrared spectroscopy (NIRS) have been used to detect motor imagery, disease, or mental state. It has already been shown in the literature that the hybrid of EEG and NIRS yields better results than either signal alone. The fusion algorithm for EEG and NIRS sources is the key to implementing them in real-life applications. In this research, we propose three fusion methods for the hybrid EEG- and NIRS-based brain-computer interface system: linear fusion, tensor fusion, and $p$th-order polynomial fusion. Firstly, our results confirm that the hybrid BCI system is more accurate, as expected. Secondly, the $p$th-order polynomial fusion achieves the best classification results of the three methods and also improves on previous studies. For a motor imagery task and a mental arithmetic task, the best detection accuracies reported in previous papers were 74.20% and 88.1%, whereas our methods achieved 77.53% and 90.19%, respectively. Furthermore, our proposed methods are less computationally demanding than complex artificial neural network methods.
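A generic feature-level sketch of the first two fusion schemes under assumed feature dimensions (not the authors' exact formulation): linear fusion concatenates the EEG and NIRS feature vectors, while tensor fusion takes their outer product, with a bias 1 appended to each vector (a common tensor-fusion trick) so unimodal features survive alongside the cross-modal interaction terms:

import numpy as np

def linear_fusion(eeg_feat, nirs_feat):
    """Concatenate per-modality feature vectors."""
    return np.concatenate([eeg_feat, nirs_feat])

def tensor_fusion(eeg_feat, nirs_feat):
    """Flattened outer product; the appended 1 keeps unimodal terms in the result."""
    e = np.append(eeg_feat, 1.0)
    n = np.append(nirs_feat, 1.0)
    return np.outer(e, n).ravel()

eeg = np.random.randn(8)    # assumed EEG feature dimension
nirs = np.random.randn(4)   # assumed NIRS feature dimension
print(linear_fusion(eeg, nirs).shape)   # (12,)
print(tensor_fusion(eeg, nirs).shape)   # (45,)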