
Motor-Imagery-Based Brain Computer Interface using Signal Derivation and Aggregation Functions

Publication date: 2021
Language: English





Brain Computer Interface (BCI) technologies are popular methods of communication between the human brain and external devices. One of the most popular approaches to BCI is Motor Imagery (MI). In BCI applications, electroencephalography (EEG) is a very popular measurement of brain dynamics because of its non-invasive nature. Although there is high interest in the BCI topic, the performance of existing systems is still far from ideal, due to the difficulty of performing pattern recognition tasks on EEG signals. BCI systems are composed of a wide range of components that perform signal pre-processing, feature extraction and decision making. In this paper, we define a BCI framework, named the Enhanced Fusion Framework, in which we propose three different ideas to improve existing MI-based BCI frameworks. Firstly, we include an additional pre-processing step: a differentiation of the EEG signal that makes it time-invariant. Secondly, we add an additional frequency band as a feature for the system and show its effect on performance. Finally, we make a thorough study of how the final decision in the system should be made. We propose the combined usage of up to six different types of classifiers and a wide range of aggregation functions (including classical aggregations, Choquet and Sugeno integrals and their extensions, and overlap functions) to fuse the information given by the considered classifiers. We have tested this new system on a dataset of 20 volunteers performing motor imagery-based brain-computer interface experiments. On this dataset, the new system achieved an accuracy of 88.80%. We also propose an optimized version of our system that is able to obtain up to 90.76%. Furthermore, we find that the pair of Choquet/Sugeno integrals and overlap functions provide the best results.
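As an illustration of the fusion stage described above, the sketch below aggregates per-class scores from several classifiers with a discrete Choquet integral. The cardinality-based (power) fuzzy measure, the first-difference pre-processing stand-in, and all function names are assumptions made for illustration, not the authors' exact implementation.

```python
import numpy as np

def first_difference(eeg):
    """Hypothetical pre-processing: first-order difference of an EEG trial
    of shape (channels, samples), standing in for the paper's
    signal-differentiation step."""
    return np.diff(eeg, axis=1)

def choquet_integral(scores, q=1.0):
    """Discrete Choquet integral of `scores` (one value per classifier)
    with respect to the cardinality-based fuzzy measure
    mu(A) = (|A| / n) ** q.  With q = 1 it reduces to the arithmetic mean."""
    x = np.sort(np.asarray(scores, dtype=float))   # x_(1) <= ... <= x_(n)
    n = x.size
    mu_tail = ((n - np.arange(n)) / n) ** q        # mu of tail sets A_(i)
    x_prev = np.concatenate(([0.0], x[:-1]))       # x_(0) = 0
    return float(np.sum((x - x_prev) * mu_tail))

def fuse_decision(class_scores, q=1.0):
    """Fuse a (n_classifiers, n_classes) score matrix class by class
    and return the index of the winning class."""
    fused = [choquet_integral(class_scores[:, c], q)
             for c in range(class_scores.shape[1])]
    return int(np.argmax(fused))

# Toy usage: three classifiers scoring two MI classes (e.g. left vs. right hand).
scores = np.array([[0.7, 0.3],
                   [0.6, 0.4],
                   [0.9, 0.1]])
print(fuse_decision(scores, q=2.0))   # 0
```

The same `fuse_decision` skeleton could call any other aggregation (Sugeno integral, overlap function, weighted mean) in place of `choquet_integral`, which is how the different fusion strategies compared in the paper could be slotted in.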



Related research

In this work we study the use of moderate deviation functions to measure similarity and dissimilarity among a set of given interval-valued data. To do so, we introduce the notion of an interval-valued moderate deviation function and we study in particular those interval-valued moderate deviation functions which preserve the width of the input intervals. Then, we study how to apply these functions to construct interval-valued aggregation functions. We have applied them in the decision-making phase of two Motor-Imagery Brain Computer Interface frameworks, obtaining better results than those obtained using other numerical and interval-valued aggregations.
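For context, a real-valued moderate deviation function D(x, y) is non-increasing in its first argument, non-decreasing in its second, and zero when x = y; aggregating a sample then amounts to finding the value y at which the summed deviations cross zero. The bisection-based sketch below is only a minimal real-valued illustration of that idea under those assumptions; it does not reproduce the interval-valued, width-preserving construction of the paper, and the function names are hypothetical.

```python
import numpy as np

def d_linear(x, y):
    """A simple moderate deviation function: D(x, y) = y - x."""
    return y - x

def deviation_based_aggregation(values, D=d_linear, tol=1e-10):
    """Aggregate `values` in [0, 1] as the point y where
    sum_i D(values[i], y) crosses zero, located by bisection.
    With D(x, y) = y - x this recovers the arithmetic mean."""
    values = np.asarray(values, dtype=float)
    total = lambda y: float(np.sum(D(values, y)))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if total(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# With the linear deviation the result equals the mean of the inputs.
print(deviation_based_aggregation([0.2, 0.4, 0.9]))   # ~0.5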
Background: Common spatial pattern (CSP) has been widely used for feature extraction from motor imagery (MI) electroencephalogram (EEG) recordings and for MI classification in brain-computer interface (BCI) applications. BCI usually requires relatively long EEG data for reliable classifier training. More specifically, before general spatial patterns are used for feature extraction, a training dictionary from two different classes is used to construct a compound dictionary matrix, and the representation of the test samples in the filter band is estimated as a linear combination of the columns in the dictionary matrix. New method: To alleviate the small-sample (SS) sparsity problem between frequency bands, we propose a novel sparse group filter bank model (SGFB) for motor imagery in BCI systems. Results: We perform the classification task by representing residuals based on the categories corresponding to the non-zero correlation coefficients. In addition, we perform joint sparse optimization with constrained filter bands in three different time windows to extract robust CSP features in a multi-task learning framework. To verify the effectiveness of our model, we conduct experiments on the public EEG dataset of the BCI competition and compare it with other competitive methods. Comparison with existing methods: Decent classification performance for different sub-bands confirms that our algorithm is a promising candidate for improving MI-based BCI performance.
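The sketch below illustrates plain two-class CSP feature extraction (spatial filters from a generalized eigendecomposition of the class covariance matrices, followed by log-variance features). It is a minimal baseline assumed for illustration and does not implement the sparse group filter bank (SGFB) model itself.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Two-class CSP.  `trials_a` / `trials_b` have shape
    (n_trials, n_channels, n_samples).  Returns 2*n_pairs spatial filters."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem: ca w = lambda (ca + cb) w
    eigvals, eigvecs = eigh(ca, ca + cb)
    order = np.argsort(eigvals)                     # ascending eigenvalues
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T                      # (2*n_pairs, channels)

def csp_features(trial, filters):
    """Log-variance features of the spatially filtered trial."""
    projected = filters @ trial
    var = np.var(projected, axis=1)
    return np.log(var / np.sum(var))

# Toy usage with random data standing in for band-pass-filtered EEG trials.
rng = np.random.default_rng(0)
a = rng.standard_normal((20, 8, 250))
b = rng.standard_normal((20, 8, 250))
W = csp_filters(a, b)
print(csp_features(a[0], W).shape)                  # (6,)
```

A filter-bank variant would simply repeat this per frequency band and concatenate the resulting feature vectors, which is the point where the sparse group constraints of SGFB would act.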
Brain-computer interface (BCI) systems have potential as assistive technologies for individuals with severe motor impairments. Nevertheless, individuals must first participate in many training sessions to obtain adequate data for optimizing the classification algorithm and subsequently acquiring brain-based control. Such traditional training paradigms have been dubbed unengaging and unmotivating for users. In recent years, it has been shown that the synergy of virtual reality (VR) and a BCI can lead to increased user engagement. This study created a 3-class BCI with a rather elaborate EEG signal processing pipeline that heavily utilizes machine learning. The BCI initially presented sham feedback but was eventually driven by EEG associated with motor imagery. The BCI tasks consisted of motor imagery of the feet and of the left and right hands, which were used to navigate a single-path maze in VR. Ten of the eleven recruited participants achieved online performance superior to chance (p < 0.01), while the majority successfully completed more than 70% of the prescribed navigational tasks. These results indicate that the proposed paradigm warrants further consideration as a neurofeedback BCI training tool: a paradigm that, from the users' perspective, allows control from the outset without the need for prior data-collection sessions.
Transfer learning (TL) has been widely used in motor imagery (MI) based brain-computer interfaces (BCIs) to reduce the calibration effort for a new subject, and has demonstrated promising performance. While a closed-loop MI-based BCI system, after electroencephalogram (EEG) signal acquisition and temporal filtering, includes spatial filtering, feature engineering, and classification blocks before sending out the control signal to an external device, previous approaches only considered TL in one or two of these components. This paper proposes that TL should be considered in all three components (spatial filtering, feature engineering, and classification) of MI-based BCIs. Furthermore, it is also very important to add a dedicated data alignment component before spatial filtering to make the data from different subjects more consistent, and hence to facilitate subsequent TL. Offline calibration experiments on two MI datasets verified our proposal. In particular, integrating data alignment and sophisticated TL approaches can significantly improve the classification performance, and hence greatly reduce the calibration effort.
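One common way to implement such a data-alignment step is Euclidean alignment, which whitens each subject's trials by the inverse square root of that subject's mean spatial covariance so that data from different subjects become more comparable. The sketch below assumes this particular technique and generic array shapes; it is not necessarily the alignment method used in the paper.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def euclidean_alignment(trials):
    """Align one subject's EEG trials (n_trials, n_channels, n_samples)
    so that their average spatial covariance becomes the identity."""
    covs = np.array([t @ t.T / t.shape[1] for t in trials])
    r_mean = covs.mean(axis=0)                        # reference covariance
    r_inv_sqrt = fractional_matrix_power(r_mean, -0.5)
    return np.array([r_inv_sqrt @ t for t in trials])

# Toy usage: random data standing in for band-pass-filtered EEG.
rng = np.random.default_rng(0)
trials = rng.standard_normal((30, 8, 250))
aligned = euclidean_alignment(trials)
mean_cov = np.mean([t @ t.T / t.shape[1] for t in aligned], axis=0)
print(np.allclose(mean_cov, np.eye(8), atol=1e-6))    # True
```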
The human brain provides a range of functions, such as expressing emotions and controlling the rate of breathing, and its study has attracted the interest of scientists for many years. As machine learning models become more sophisticated, and biometric data becomes more readily available through new non-invasive technologies, it becomes increasingly possible to gain access to interesting biometric data that could revolutionize Human-Computer Interaction. In this research, we propose a method to assess and quantify human attention levels and their effects on learning. In our study, we employ a brain-computer interface (BCI) capable of detecting brain wave activity and displaying the corresponding electroencephalograms (EEG). We train recurrent neural networks (RNNs) to identify the type of activity an individual is performing.
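As a rough illustration of the recurrent-network idea, the sketch below defines a small LSTM classifier over windows of EEG samples. The architecture, window length and number of activity classes are assumptions made for illustration rather than the configuration used in the study.

```python
import torch
import torch.nn as nn

class EEGActivityRNN(nn.Module):
    """Minimal LSTM classifier: a window of EEG samples
    (batch, time, channels) -> one of `n_classes` activity labels."""
    def __init__(self, n_channels=8, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        _, (h_n, _) = self.lstm(x)      # h_n: (1, batch, hidden)
        return self.head(h_n[-1])       # logits: (batch, n_classes)

# Toy forward pass on random data standing in for a short EEG window.
model = EEGActivityRNN()
window = torch.randn(4, 256, 8)         # (batch, time, channels)
print(model(window).shape)              # torch.Size([4, 3])
```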
