
Simulation Experiment of BCI Based on Imagined Speech EEG Decoding

Added by Kang Wang
Publication date: 2017
Language: English





A Brain-Computer Interface (BCI) can help patients with neuromuscular diseases restore part of the movement and communication abilities they have lost. Most BCIs rely on mapping brain activities to device instructions, so the limited number of distinguishable brain activities limits what such BCIs can do. To address this limitation, this paper verifies the feasibility of constructing a BCI based on decoding imagined speech from electroencephalography (EEG). Because sentences decoded from EEG can carry rich meaning, BCIs based on EEG decoding can support numerous control instructions. By combining a modified EEG feature extraction method with connectionist temporal classification (CTC), this paper simulates decoding imagined speech EEG using synthetic EEG data, without the aid of a speech signal. The performance of the decoding model on synthetic data demonstrates, to a certain extent, the feasibility of constructing a BCI based on imagined speech brain signals.
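The paper's implementation is not published with this abstract; the sketch below only illustrates, under assumed dimensions, how a recurrent encoder over EEG feature sequences can be trained with a CTC objective to emit character sequences without an aligned speech signal. The feature size, vocabulary, network layers, and synthetic tensors are all illustrative assumptions, not the authors' model.

```python
# Minimal sketch (not the paper's code): a recurrent encoder trained with a CTC
# objective to map imagined-speech EEG feature sequences to character sequences.
# All shapes, layer sizes, and the synthetic data below are illustrative assumptions.
import torch
import torch.nn as nn

N_FEATS = 64        # EEG features per time step (assumed)
N_CLASSES = 28      # 26 letters + space + CTC blank (index 0)

class EEG2TextCTC(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_FEATS, 128, num_layers=2,
                          bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 128, N_CLASSES)

    def forward(self, x):                  # x: (batch, time, feats)
        h, _ = self.rnn(x)
        return self.fc(h).log_softmax(-1)  # (batch, time, classes)

model = EEG2TextCTC()
ctc = nn.CTCLoss(blank=0)

# Synthetic stand-in for imagined-speech EEG features and target sentences.
x = torch.randn(4, 200, N_FEATS)                     # 4 trials, 200 time steps
targets = torch.randint(1, N_CLASSES, (4, 20))       # 20 characters per trial
input_lens = torch.full((4,), 200, dtype=torch.long)
target_lens = torch.full((4,), 20, dtype=torch.long)

log_probs = model(x).transpose(0, 1)                 # CTC expects (time, batch, classes)
loss = ctc(log_probs, targets, input_lens, target_lens)
loss.backward()
print(float(loss))
```

In this setup the CTC loss handles the unknown alignment between EEG time steps and output characters, which is why no accompanying speech signal is needed for supervision.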



Related Research

A brain-computer interface (BCI) is used not only to control external devices for healthy people but also to rehabilitate motor functions in motor-disabled patients. Decoding movement intention is one of the most significant aspects of performing arm movement tasks using brain signals. Decoding movement execution (ME) from electroencephalogram (EEG) signals has shown high performance in previous works; however, movement imagination (MI) paradigm-based intention decoding has so far failed to achieve sufficient accuracy. In this study, we focused on a robust MI decoding method with transfer learning between the ME and MI paradigms. We acquired EEG data related to arm reaching in 3D directions. We proposed a BCI transfer learning method based on a Relation network (BTRN) architecture. Decoding performance was the highest compared to conventional works. We confirmed that the BTRN architecture can contribute to continuous decoding of MI using ME datasets.
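The BTRN architecture itself is not reproduced here; the following is a minimal sketch of the general relation-network idea the abstract builds on: an embedding module encodes EEG trials, and a relation module scores how closely a query trial matches a class prototype (for example, a prototype built from ME trials and queried with an MI trial). All layer sizes and the synthetic data are assumptions.

```python
# Illustrative sketch of a relation-network-style comparator for EEG trials
# (not the authors' BTRN implementation); sizes are assumptions.
import torch
import torch.nn as nn

class Embedding(nn.Module):
    """1-D CNN that turns an EEG trial (channels x time) into a feature vector."""
    def __init__(self, n_channels=64, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, emb_dim))

    def forward(self, x):          # x: (batch, channels, time)
        return self.net(x)

class RelationModule(nn.Module):
    """Scores how strongly a query embedding relates to a class prototype."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * emb_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, query, prototype):
        return self.net(torch.cat([query, prototype], dim=-1))

embed, relate = Embedding(), RelationModule()
# Prototype from labelled ME trials, query from an MI trial (synthetic here).
prototype = embed(torch.randn(8, 64, 250)).mean(0, keepdim=True)
query = embed(torch.randn(1, 64, 250))
score = relate(query, prototype)   # relation score in [0, 1]
print(score.item())
```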
Stroke is the leading cause of serious and long-term disability worldwide. Some studies have shown that motor imagery (MI)-based BCI has a positive effect in post-stroke rehabilitation, helping patients promote reorganization processes in the damaged brain regions. However, offline motor imagery and conventional online motor imagery with feedback (such as rewarding sounds and movements of an avatar) cannot reflect the true intention of the patients. In this study, both virtual limbs and functional electrical stimulation (FES) were used as feedback to provide patients with closed-loop sensorimotor integration for motor rehabilitation. The FES system activated if the user was imagining hand movement on the instructed side. Ten stroke patients (7 male, aged 22-70 years, mean 49.5±15.1) were involved in this study. All of them participated in BCI-FES rehabilitation training for 4 weeks. The average motor imagery accuracy of the ten patients in the last week was 71.3%, a 3% improvement over the first week. Five patients' Fugl-Meyer Assessment (FMA) scores increased. Patient 6, who had suffered from stroke for over two years, achieved the greatest improvement after rehabilitation training (pre FMA: 20, post FMA: 35). Regarding brain patterns, the active patterns of the five patients gradually became centralized and shifted to sensorimotor areas (channels C3 and C4) and the premotor area (channels FC3 and FC4). In this study, a motor imagery-based BCI and an FES system were combined to provide stroke patients with closed-loop sensorimotor integration for motor rehabilitation. The results showed evidence that the BCI-FES system is effective in restoring upper extremity motor function after stroke. In future work, more cases are needed to demonstrate its superiority over conventional therapy and to explore the potential role of MI in post-stroke rehabilitation.
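The closed loop described above reduces to a simple decision rule: stimulate only when the decoded imagery matches the instructed side. The snippet below is a schematic simulation of that rule, not the clinical system; the classifier, FES driver, and confidence threshold are placeholders.

```python
# Schematic of the closed-loop BCI-FES idea (not the clinical system): FES is
# triggered only when the decoded motor imagery matches the instructed side.
# The classifier, FES driver, and threshold below are placeholder assumptions.
import random

def classify_mi(eeg_window):
    """Placeholder MI classifier returning 'left' or 'right' with a confidence."""
    return random.choice(["left", "right"]), random.uniform(0.5, 1.0)

def trigger_fes(side):
    print(f"FES burst delivered to {side} arm")

CONFIDENCE_THRESHOLD = 0.7   # assumed decision threshold

def run_trial(instructed_side, eeg_window):
    decoded_side, confidence = classify_mi(eeg_window)
    if decoded_side == instructed_side and confidence >= CONFIDENCE_THRESHOLD:
        trigger_fes(instructed_side)   # close the sensorimotor loop
    else:
        print("No stimulation: imagery not detected on the instructed side")

run_trial("left", eeg_window=None)
```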
The study reports the performance of Parkinson's disease (PD) patients in operating a motor imagery-based brain-computer interface (MI-BCI) and compares three selected pre-processing and classification approaches. The experiment was conducted on 7 PD patients who performed a total of 14 MI-BCI sessions targeting the lower extremities. EEG was recorded during the initial calibration phase of each session, and the specific BCI models were produced using Spectrally weighted Common Spatial Patterns (SpecCSP), Source Power Comodulation (SPoC), and Filter-Bank Common Spatial Patterns (FBCSP) methods. The results showed that FBCSP outperformed SPoC in terms of accuracy, and outperformed both SPoC and SpecCSP in terms of the false-positive ratio. The study also demonstrates that PD patients were capable of operating an MI-BCI, although with lower accuracy.
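None of the three calibration pipelines is reproduced here; the sketch below only shows the common CSP-plus-classifier pattern they share, using MNE-Python and scikit-learn on synthetic epochs. Channel counts, epoch lengths, and the classifier choice are assumptions; FBCSP additionally applies this per frequency band and concatenates the resulting features.

```python
# Sketch of the generic CSP-based calibration pipeline that methods such as
# FBCSP build on (not the study's exact code); data here is synthetic and
# all parameters are illustrative assumptions.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 32, 500))   # 80 epochs, 32 channels, 500 samples
y = rng.integers(0, 2, 80)               # binary MI labels (rest vs. movement)

# CSP learns spatial filters that maximise variance differences between classes;
# FBCSP repeats this per frequency band and concatenates the log-variance features.
clf = make_pipeline(CSP(n_components=4, log=True), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```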
At present, people usually use methods based on convolutional neural networks (CNNs) for electroencephalogram (EEG) decoding. However, CNNs have limitations in perceiving global dependencies, which is not adequate for common EEG paradigms with strong overall relationships. To address this issue, we propose a novel EEG decoding method that mainly relies on the attention mechanism. The EEG data is first preprocessed and spatially filtered. Then, we apply attention transforming on the feature-channel dimension so that the model can enhance the more relevant spatial features. The most crucial step is to slice the data in the time dimension for attention transforming, finally obtaining a highly distinguishable representation. Global average pooling and a simple fully-connected layer are then used to classify different categories of EEG data. Experiments on two public datasets indicate that the strategy of attention transforming effectively utilizes spatial and temporal features, and we reach the state-of-the-art level in multi-class EEG classification with fewer parameters. As far as we know, this is the first time that a detailed and complete method based on the transformer idea has been proposed in this field. It has good potential to promote the practicality of brain-computer interfaces (BCI). The source code can be found at: https://github.com/anranknight/EEG-Transformer.
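The authors' implementation is available at the repository linked above; the snippet below is an independent, minimal illustration of the time-slice idea only: the time axis is cut into slices, each slice becomes a token for a standard Transformer encoder, and global average pooling plus a fully-connected layer produce the class logits. All dimensions and the four-class output are assumptions.

```python
# Minimal illustration (not the linked repository's code) of attention over
# temporal slices of spatially filtered EEG; all sizes are assumptions.
import torch
import torch.nn as nn

B, C, T = 8, 22, 400            # batch, spatially filtered channels, time samples
SLICE = 40                      # samples per temporal slice
x = torch.randn(B, C, T)

# Slice the time axis so that each slice becomes one token for self-attention.
tokens = x.unfold(-1, SLICE, SLICE)                  # (B, C, T//SLICE, SLICE)
tokens = tokens.permute(0, 2, 1, 3).reshape(B, T // SLICE, C * SLICE)

d_model = 128
proj = nn.Linear(C * SLICE, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
head = nn.Linear(d_model, 4)                         # e.g. 4-class motor imagery

h = encoder(proj(tokens))                            # (B, n_slices, d_model)
logits = head(h.mean(dim=1))                         # global average pooling over slices
print(logits.shape)                                  # torch.Size([8, 4])
```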
People suffering from hearing impairment often have difficulties participating in conversations in so-called 'cocktail party' scenarios with multiple people talking simultaneously. Although advanced algorithms exist to suppress background noise in these situations, a hearing device also needs information about which of these speakers the user actually aims to attend to. The correct (attended) speaker can then be enhanced using this information, and all other speakers can be treated as background noise. Recent neuroscientific advances have shown that it is possible to determine the focus of auditory attention from non-invasive neurorecording techniques such as electroencephalography (EEG). Based on these new insights, a multitude of auditory attention decoding (AAD) algorithms have been proposed, which, combined with appropriate speaker separation algorithms and miniaturized EEG sensor devices, could lead to so-called neuro-steered hearing devices. In this paper, we provide a broad review and a statistically grounded comparative study of EEG-based AAD algorithms and address the main signal processing challenges in this field.
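As a concrete illustration of one widely used family of AAD algorithms (linear stimulus reconstruction, a common baseline rather than any single reviewed paper's method), the sketch below trains a ridge-regression decoder to reconstruct the attended speech envelope from lagged EEG and then picks the speaker whose envelope correlates best with the reconstruction. The signals are synthetic, and the lag range, regularization, and dimensions are assumptions.

```python
# Sketch of a classic linear stimulus-reconstruction approach to auditory
# attention decoding (a common baseline, not a specific paper's method).
# Signals here are synthetic; lags and sizes are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_samples, n_channels, n_lags = 4000, 16, 8

# Synthetic speech envelopes and EEG that weakly encodes the attended one.
attended = rng.standard_normal(n_samples)
unattended = rng.standard_normal(n_samples)
eeg = 0.1 * attended[:, None] + rng.standard_normal((n_samples, n_channels))

# Build a lagged EEG design matrix (backward model).
X = np.hstack([np.roll(eeg, -lag, axis=0) for lag in range(n_lags)])

# In practice the decoder is trained on held-out data; this sketch skips the split.
decoder = Ridge(alpha=1.0).fit(X, attended)
reconstruction = decoder.predict(X)

# Attention decision: which speaker's envelope correlates best with the reconstruction?
corr_att = np.corrcoef(reconstruction, attended)[0, 1]
corr_unatt = np.corrcoef(reconstruction, unattended)[0, 1]
print("decoded speaker:", "attended" if corr_att > corr_unatt else "unattended")
```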