Transfer learning (TL) has been widely used in motor imagery (MI) based brain-computer interfaces (BCIs) to reduce the calibration effort for a new subject, and has demonstrated promising performance. Although a closed-loop MI-based BCI system, after electroencephalogram (EEG) signal acquisition and temporal filtering, includes spatial filtering, feature engineering, and classification blocks before sending a control signal to an external device, previous approaches considered TL in only one or two of these components. This paper proposes that TL could be considered in all three components (spatial filtering, feature engineering, and classification) of MI-based BCIs. Furthermore, it is also important to add a data alignment component before spatial filtering to make the data from different subjects more consistent, and hence to facilitate subsequent TL. Offline calibration experiments on two MI datasets verified our proposal. In particular, integrating data alignment with sophisticated TL approaches significantly improves the classification performance, and hence greatly reduces the calibration effort.
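For concreteness, one widely used choice for such a data alignment component is Euclidean alignment, which whitens each subject's trials with the inverse square root of that subject's mean spatial covariance. The sketch below illustrates this idea in Python, assuming trials are stored as a NumPy array of shape (n_trials, n_channels, n_samples); it is an illustrative sketch of the alignment idea, not necessarily the exact pipeline used in the paper.

```python
import numpy as np

def euclidean_align(trials):
    """Align one subject's EEG trials (n_trials, n_channels, n_samples)
    by whitening with the inverse square root of the mean covariance.
    Illustrative sketch of a data-alignment step before spatial filtering."""
    # Per-trial spatial covariance matrices
    covs = np.array([t @ t.T / t.shape[1] for t in trials])
    mean_cov = covs.mean(axis=0)
    # Inverse matrix square root of the mean covariance (with a small floor
    # on the eigenvalues for numerical stability)
    eigvals, eigvecs = np.linalg.eigh(mean_cov)
    eigvals = np.clip(eigvals, 1e-10, None)
    inv_sqrt = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T
    # Aligned trials have (approximately) identity mean covariance,
    # which makes data from different subjects more comparable
    return np.array([inv_sqrt @ t for t in trials])

# Usage: align each subject separately before spatial filtering (e.g. CSP)
# aligned = euclidean_align(raw_trials)
```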
We introduce here the idea of meta-learning for training EEG BCI decoders. Meta-learning is a way of training machine learning systems so that they learn to learn. We apply meta-learning to a simple deep learning BCI architecture and compare it to transfer learning on the same architecture. Our meta-learning strategy operates by finding optimal parameters for the BCI decoder so that it can quickly generalise between different users and recording sessions, and thereby also generalise quickly to new users or new sessions. We tested our algorithm on the Physionet EEG motor imagery dataset. Our approach increased motor imagery classification accuracy from 60% to 80%, outperforming other algorithms under the little-data condition. We believe that establishing the meta-learning, or learning-to-learn, approach will help neural engineering and human interfacing meet the challenge of quickly setting up decoders of neural signals, making them more suitable for daily-life use.
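To illustrate what such a meta-learning strategy can look like, the sketch below shows a Reptile-style meta-training loop in PyTorch, where each task is one subject's (or session's) DataLoader. Reptile is used here only as a simple stand-in for whichever meta-learning algorithm and decoder architecture the study actually used; the names and hyperparameters (task_loaders, inner_lr, meta_lr, etc.) are illustrative assumptions.

```python
import copy
import random
from itertools import cycle, islice

import torch
from torch import nn

def reptile_meta_train(model, task_loaders, meta_lr=0.1, inner_lr=1e-3,
                       inner_steps=5, meta_iters=1000):
    """Reptile-style meta-training sketch. Each 'task' is one subject's
    (or session's) DataLoader of (EEG batch, label) pairs."""
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(meta_iters):
        # Sample one task and clone the current meta-parameters into a learner
        loader = random.choice(task_loaders)
        learner = copy.deepcopy(model)
        opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        # Inner loop: adapt quickly on a few batches from this task
        for x, y in islice(cycle(loader), inner_steps):
            opt.zero_grad()
            loss_fn(learner(x), y).backward()
            opt.step()
        # Outer (meta) update: move meta-parameters toward the adapted ones,
        # encouraging parameters that adapt to any task in a few steps
        with torch.no_grad():
            for p_meta, p_task in zip(model.parameters(), learner.parameters()):
                p_meta += meta_lr * (p_task - p_meta)
    return model
```

At test time, the meta-trained model is fine-tuned for a handful of gradient steps on the small amount of data available from a new user or session, which is exactly the little-data condition the abstract refers to.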
Brain-computer interface (BCI) systems have potential as assistive technologies for individuals with severe motor impairments. Nevertheless, individuals must first participate in many training sessions to obtain adequate data for optimizing the classification algorithm and subsequently acquiring brain-based control. Such traditional training paradigms have been described as unengaging and unmotivating for users. In recent years, it has been shown that combining virtual reality (VR) with a BCI can increase user engagement. This study created a 3-class BCI with an elaborate EEG signal processing pipeline that relies heavily on machine learning. The BCI initially presented sham feedback but was eventually driven by EEG associated with motor imagery. The BCI tasks consisted of motor imagery of the feet and of the left and right hands, which were used to navigate a single-path maze in VR. Ten of the eleven recruited participants achieved online performance superior to chance (p < 0.01), and the majority successfully completed more than 70% of the prescribed navigational tasks. These results indicate that the proposed paradigm warrants further consideration as a neurofeedback BCI training tool: a paradigm that, from the user's perspective, offers control from the outset without the need for prior data collection sessions.
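As a note on the above-chance claim, a common way to test whether online accuracy exceeds the 1/3 chance level of a 3-class BCI is a one-sided binomial test on the number of correct trials. The snippet below is an illustrative sketch of that idea and may differ from the significance procedure actually used in the study; the counts in the usage comment are made up.

```python
from scipy.stats import binomtest

def above_chance(n_correct, n_trials, n_classes=3, alpha=0.01):
    """One-sided binomial test of online accuracy against the theoretical
    chance level 1/n_classes. Returns (significant, p-value)."""
    result = binomtest(n_correct, n_trials, p=1.0 / n_classes,
                       alternative='greater')
    return result.pvalue < alpha, result.pvalue

# Hypothetical example: 40 correct responses out of 75 three-class trials
# significant, p = above_chance(40, 75)
```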
Brain-computer interface (BCI) technologies are popular methods of communication between the human brain and external devices. One of the most popular approaches to BCI is motor imagery (MI). In BCI applications, electroencephalography (EEG) is a very popular measurement of brain dynamics because of its non-invasive nature. Although there is high interest in BCIs, the performance of existing systems is still far from ideal, due to the difficulty of performing pattern recognition on EEG signals. BCI systems are composed of a wide range of components that perform signal pre-processing, feature extraction, and decision making. In this paper, we define a BCI framework, named the Enhanced Fusion Framework, in which we propose three ideas to improve existing MI-based BCI frameworks. Firstly, we include an additional pre-processing step: a differentiation of the EEG signal that makes it time-invariant. Secondly, we add an additional frequency band as a feature and show its effect on the performance of the system. Finally, we study in depth how the final decision is made in the system. We propose using up to six different classifiers together with a wide range of aggregation functions (including classical aggregations, Choquet and Sugeno integrals and their extensions, and overlap functions) to fuse the information given by the considered classifiers. We have tested this new system on a dataset of 20 volunteers performing motor imagery-based brain-computer interface experiments. On this dataset, the new system achieved an accuracy of 88.80%. We also propose an optimized version of our system that obtains up to 90.76%. Furthermore, we find that Choquet/Sugeno integrals and overlap functions are the aggregations providing the best results.
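To make the fusion step concrete, the sketch below shows a discrete Choquet integral with respect to a simple power fuzzy measure, used to aggregate the per-class confidences of several classifiers and pick the winning class. The power measure and all names are illustrative assumptions; the paper itself explores many aggregation functions and their extensions, of which this is only one instance.

```python
import numpy as np

def choquet_integral(scores, q=1.0):
    """Discrete Choquet integral of classifier confidences for one class,
    using the power fuzzy measure m(A) = (|A|/n)**q (an illustrative choice).
    With q = 1 this reduces to the arithmetic mean."""
    x = np.sort(np.asarray(scores, dtype=float))      # ascending order
    n = len(x)
    # measure of the set of classifiers whose score is >= x[i]
    m = ((n - np.arange(n)) / n) ** q
    x_prev = np.concatenate(([0.0], x[:-1]))
    return float(np.sum((x - x_prev) * m))

def fuse(confidences):
    """Fuse a (n_classifiers, n_classes) confidence matrix and return the
    index of the class with the largest aggregated confidence."""
    agg = [choquet_integral(confidences[:, c]) for c in range(confidences.shape[1])]
    return int(np.argmax(agg))
```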
In this work we study the use of moderate deviation functions to measure similarity and dissimilarity among a given set of interval-valued data. To do so, we introduce the notion of an interval-valued moderate deviation function, and we study in particular those interval-valued moderate deviation functions that preserve the width of the input intervals. Then, we study how to apply these functions to construct interval-valued aggregation functions. We have applied them in the decision-making phase of two motor imagery brain-computer interface frameworks, obtaining better results than those obtained using other numerical and interval-valued aggregations.
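For intuition, the following sketch shows the real-valued version of deviation-based aggregation: the aggregated value is the point where the summed moderate deviations of the inputs cross zero. The interval-valued, width-preserving construction studied in the paper is more involved; this code, its names, and the choice of deviation function are only illustrative assumptions.

```python
def deviation_aggregate(xs, D, lo=0.0, hi=1.0, tol=1e-9):
    """Aggregate values xs in [lo, hi] as the point a where sum_i D(x_i, a) = 0,
    found by bisection. D is assumed to be a moderate deviation function:
    non-decreasing in its second argument, non-increasing in the first,
    and D(x, x) = 0, so the sum crosses zero exactly once in [lo, hi]."""
    f = lambda a: sum(D(x, a) for x in xs)
    a_lo, a_hi = lo, hi
    while a_hi - a_lo > tol:
        mid = 0.5 * (a_lo + a_hi)
        if f(mid) < 0:
            a_lo = mid
        else:
            a_hi = mid
    return 0.5 * (a_lo + a_hi)

# With D(x, a) = a - x this recovers the arithmetic mean:
# deviation_aggregate([0.2, 0.4, 0.9], lambda x, a: a - x)  # ~= 0.5
```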
Background: Common spatial patterns (CSP) has been widely used for feature extraction from motor imagery (MI) electroencephalogram (EEG) recordings and for MI classification in brain-computer interface (BCI) applications. BCIs usually require relatively long EEG recordings for reliable classifier training. More specifically, before using general spatial patterns for feature extraction, a training dictionary from two different classes is used to construct a compound dictionary matrix, and the representation of the test samples in the filter band is estimated as a linear combination of the columns of the dictionary matrix. New method: To alleviate the sparse small-sample (SS) problem across frequency bands, we propose a novel sparse group filter bank (SGFB) model for motor imagery in BCI systems. Results: Classification is performed using the representation residuals of the categories corresponding to the non-zero correlation coefficients. In addition, we perform joint sparse optimization with constrained filter bands in three different time windows to extract robust CSP features in a multi-task learning framework. To verify the effectiveness of our model, we conduct experiments on a public EEG dataset from the BCI competition and compare the model with other competitive methods. Comparison with existing methods: Decent classification performance across different subbands confirms that our algorithm is a promising candidate for improving MI-based BCI performance.
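As background for the feature-extraction step, the sketch below shows plain CSP filter computation and log-variance feature extraction for a single frequency band and time window; the SGFB model builds its sparse-representation and multi-task constraints on top of features of this kind. The code is an illustrative sketch, not the authors' implementation, and the function and parameter names are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Compute CSP spatial filters from band-pass-filtered trials of two MI
    classes (each array: n_trials x n_channels x n_samples)."""
    def mean_cov(trials):
        # Trace-normalized spatial covariance, averaged over trials
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem: ca w = lambda (ca + cb) w
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)  # ascending eigenvalues
    # Keep the filters with the most extreme eigenvalues (both ends)
    picks = np.concatenate((order[:n_pairs], order[-n_pairs:]))
    return vecs[:, picks].T  # shape (2 * n_pairs, n_channels)

def csp_features(trial, W):
    """Normalized log-variance features of one trial projected by filters W."""
    z = W @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())
```

In a filter-bank setting, these features are computed once per subband (and, in the SGFB model, per time window), and the per-band feature vectors are then combined by the sparse group model described in the abstract.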