
Classification of Upper Arm Movements from EEG signals using Machine Learning with ICA Analysis

Submitted by Dr. Amit Joshi
Publication date: 2021
Language: English





The Brain-Computer Interface (BCI) is a rapidly developing area of experimentation for motor activities and plays a vital role in decoding cognitive activities. Classifying cognitive-motor imagery activities from EEG signals is a critical task. We therefore propose an algorithm for classifying left/right-hand movements using a Multi-Layer Perceptron neural network. Handcrafted statistical time-domain features and power-spectral-density frequency-domain features were extracted, yielding a combined accuracy of 96.02%. The results were compared with a deep learning framework. In addition to accuracy, precision, recall, and F1-score were considered as performance metrics. The intrusion of unwanted signals contaminates the EEG and degrades the algorithm's performance; therefore, artifacts were removed using Independent Component Analysis, which boosted performance, followed by the selection of feature vectors that provided acceptable accuracy. The same method was applied to all nine subjects, giving an average intra-subject accuracy of 94.72%. The results show that the proposed approach can classify upper-limb movements accurately.
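For illustration only, the following Python sketch mirrors the pipeline the abstract describes: ICA-based artifact reduction, handcrafted time-domain and power-spectral-density features, and a multi-layer perceptron classifier. The data shapes, sampling rate, frequency bands, rejected component, and MLP hyperparameters are assumptions, not the authors' reported configuration.

    # Minimal sketch (not the authors' code): ICA artifact reduction, handcrafted
    # time/frequency features, and an MLP classifier on synthetic stand-in data.
    import numpy as np
    from scipy.signal import welch
    from sklearn.decomposition import FastICA
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    fs = 250                                   # assumed sampling rate (Hz)
    X = rng.standard_normal((200, 8, fs * 2))  # 200 trials, 8 channels, 2 s each
    y = rng.integers(0, 2, 200)                # assumed left/right-hand labels

    def remove_artifacts(trial, n_components=8, reject=0):
        # Decompose one trial with ICA, zero out a suspect component, reconstruct.
        ica = FastICA(n_components=n_components, random_state=0)
        sources = ica.fit_transform(trial.T)   # (samples, components)
        sources[:, reject] = 0.0               # in practice, pick components by inspection/statistics
        return ica.inverse_transform(sources).T

    def extract_features(trial):
        # Per-channel time-domain statistics plus Welch PSD band powers.
        feats = []
        for ch in trial:
            feats += [ch.mean(), ch.std(), np.ptp(ch)]   # time-domain statistics
            f, psd = welch(ch, fs=fs, nperseg=fs)
            for lo, hi in [(8, 13), (13, 30)]:           # assumed mu and beta bands
                feats.append(psd[(f >= lo) & (f < hi)].mean())
        return np.array(feats)

    features = np.array([extract_features(remove_artifacts(t)) for t in X])
    X_tr, X_te, y_tr, y_te = train_test_split(features, y, test_size=0.25, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))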




Read also

Classifying limb movements using brain activity is an important task in Brain-Computer Interfaces (BCI) that has been successfully applied in multiple domains, ranging from human-computer interaction to medical and biomedical applications. This paper proposes a novel solution for the classification of left/right hand movement by exploiting a Long Short-Term Memory (LSTM) network with an attention mechanism to learn the electroencephalogram (EEG) time-series information. To this end, a wide range of time- and frequency-domain features are extracted from the EEG signals and used to train an LSTM network to perform the classification task. We conduct extensive experiments with the EEG Movement dataset and show that our proposed solution achieves improvements over several benchmarks and state-of-the-art methods in both intra-subject and cross-subject validation schemes. Moreover, we utilize the proposed framework to analyze the information received by the sensors and monitor the activated regions of the brain by tracking EEG topography throughout the experiments.
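As a rough illustration of the kind of model this abstract describes, the PyTorch sketch below applies an LSTM with a simple attention-pooling layer to windows of extracted EEG features; the layer sizes, input dimensions, and the exact form of the attention are assumptions rather than the paper's architecture.

    # Illustrative LSTM with attention pooling for binary left/right classification.
    import torch
    import torch.nn as nn

    class AttentiveLSTM(nn.Module):
        def __init__(self, in_features=64, hidden=128, n_classes=2):
            super().__init__()
            self.lstm = nn.LSTM(in_features, hidden, batch_first=True)
            self.attn = nn.Linear(hidden, 1)           # scores each time step
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):                          # x: (batch, time, features)
            h, _ = self.lstm(x)                        # (batch, time, hidden)
            w = torch.softmax(self.attn(h), dim=1)     # attention weights over time
            context = (w * h).sum(dim=1)               # weighted sum of hidden states
            return self.head(context)

    model = AttentiveLSTM()
    logits = model(torch.randn(8, 50, 64))             # 8 trials, 50 windows, 64 features (assumed)
    print(logits.shape)                                # torch.Size([8, 2])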
Brain signals could be used to control devices to assist individuals with disabilities. Signals such as electroencephalograms are complicated and hard to interpret. A set of signals is collected and must be classified to identify the intention of the subject. Different approaches have tried to reduce the number of channels before sending them to a classifier. We propose a deep learning-based method for selecting an informative subset of channels that produces high classification accuracy. The proposed network can be trained for an individual subject to select an appropriate set of channels. Reducing the number of channels can reduce the complexity of brain-computer interface devices. Our method finds a subset of channels whose accuracy is comparable with that of a model trained on all channels. Hence, our model's temporal and power costs are low, while its accuracy is kept high.
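The abstract does not spell out the selection mechanism, so the sketch below shows one common deep-learning approach to channel selection, a learnable per-channel gate trained with an L1 sparsity penalty, purely as an illustration; the hypothetical ChannelGateNet, its layer sizes, and the penalty weight are assumptions, not the paper's method.

    # Illustrative learnable channel gate; channels with large |gate| after training
    # form the selected subset.
    import torch
    import torch.nn as nn

    class ChannelGateNet(nn.Module):
        def __init__(self, n_channels=64, n_samples=250, n_classes=2):
            super().__init__()
            self.gate = nn.Parameter(torch.ones(n_channels))        # one weight per EEG channel
            self.classifier = nn.Sequential(
                nn.Flatten(), nn.Linear(n_channels * n_samples, 128),
                nn.ReLU(), nn.Linear(128, n_classes))

        def forward(self, x):                                       # x: (batch, channels, samples)
            gated = x * self.gate.view(1, -1, 1)                    # scale each channel
            return self.classifier(gated)

        def sparsity_penalty(self):
            return self.gate.abs().sum()                            # L1 term added to the loss

    model = ChannelGateNet()
    out = model(torch.randn(4, 64, 250))
    loss = nn.functional.cross_entropy(out, torch.tensor([0, 1, 0, 1])) \
           + 1e-3 * model.sparsity_penalty()
    loss.backward()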
A brain-computer interface (BCI) is used not only to control external devices for healthy people but also to rehabilitate motor functions for motor-disabled patients. Decoding movement intention is one of the most significant aspects of performing arm movement tasks using brain signals. Decoding movement execution (ME) from electroencephalogram (EEG) signals has shown high performance in previous works; however, movement imagination (MI) paradigm-based intention decoding has so far failed to achieve sufficient accuracy. In this study, we focused on a robust MI decoding method with transfer learning for the ME and MI paradigms. We acquired EEG data related to arm reaching in 3D directions. We propose a BCI transfer learning method based on a Relation Network (BTRN) architecture. Decoding performance was the highest compared with conventional works. We confirmed the ability of the BTRN architecture to contribute to continuous decoding of MI using ME datasets.
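As a loose illustration of a relation-network-style comparison (an embedding module plus a relation module that scores query/support pairs), the sketch below is only inspired by the BTRN idea summarized above; all architectural details, channel counts, and trial lengths are assumptions.

    # Illustrative embedding + relation modules scoring MI queries against ME supports.
    import torch
    import torch.nn as nn

    class Embedder(nn.Module):
        def __init__(self, n_channels=32, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(n_channels, hidden, kernel_size=7, padding=3), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten())

        def forward(self, x):                       # x: (batch, channels, samples)
            return self.net(x)                      # (batch, hidden)

    class RelationHead(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(2 * hidden, 32), nn.ReLU(),
                                     nn.Linear(32, 1), nn.Sigmoid())

        def forward(self, query_emb, support_emb):  # relation score in [0, 1]
            return self.net(torch.cat([query_emb, support_emb], dim=1))

    embed, relate = Embedder(), RelationHead()
    query = embed(torch.randn(4, 32, 500))          # assumed MI trials
    support = embed(torch.randn(4, 32, 500))        # assumed ME reference trials
    print(relate(query, support).shape)             # torch.Size([4, 1])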
Jian Cui, Zirui Lan, Yisi Liu (2021)
Driver drowsiness is one of the main factors leading to road fatalities and hazards in the transportation industry. Electroencephalography (EEG) has been considered one of the best physiological signals for detecting drivers' drowsy states, since it directly measures neurophysiological activities in the brain. However, designing a calibration-free system for driver drowsiness detection with EEG is still a challenging task, as EEG suffers from serious mental and physical drifts across different subjects. In this paper, we propose a compact and interpretable Convolutional Neural Network (CNN) to discover shared EEG features across different subjects for driver drowsiness detection. We incorporate a Global Average Pooling (GAP) layer in the model structure, allowing the Class Activation Map (CAM) method to be used for localizing the regions of the input signal that contribute most to the classification. Results show that the proposed model can achieve an average accuracy of 73.22% on 11 subjects for 2-class cross-subject EEG signal classification, which is higher than conventional machine learning methods and other state-of-the-art deep learning methods. The visualization technique reveals that the model has learned biologically explainable features, e.g., alpha spindles and theta bursts, as evidence for the drowsy state. It is also interesting to see that the model uses artifacts that usually dominate wakeful EEG, e.g., muscle artifacts and sensor drifts, to recognize the alert state. The proposed model illustrates a potential direction for using CNN models as a powerful tool to discover shared features related to different mental states across different subjects from EEG signals.
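To make the GAP/CAM idea concrete, the sketch below shows a compact 1D CNN whose global-average-pooled features feed a linear classifier, with a class activation map computed from the classifier weights; filter counts, kernel sizes, and input dimensions are assumptions, not the published model.

    # Illustrative CNN with Global Average Pooling and a Class Activation Map.
    import torch
    import torch.nn as nn

    class GapCNN(nn.Module):
        def __init__(self, n_channels=30, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel_size=64, padding=32), nn.ReLU(),
                nn.Conv1d(32, 32, kernel_size=16, padding=8), nn.ReLU())
            self.gap = nn.AdaptiveAvgPool1d(1)        # global average pooling over time
            self.fc = nn.Linear(32, n_classes)

        def forward(self, x):                         # x: (batch, channels, samples)
            f = self.features(x)                      # (batch, 32, time)
            logits = self.fc(self.gap(f).squeeze(-1))
            return logits, f

        def cam(self, feature_maps, class_idx):
            # CAM: weight each feature map by its class weight and sum over maps.
            w = self.fc.weight[class_idx]             # (32,)
            return torch.einsum("c,bct->bt", w, feature_maps)

    model = GapCNN()
    logits, fmaps = model(torch.randn(2, 30, 750))    # e.g. 3 s of 250 Hz EEG (assumed)
    print(model.cam(fmaps, class_idx=1).shape)        # per-time-step contribution map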
Schizophrenia (SZ) is a mental disorder in which, due to the secretion of specific chemicals in the brain, the function of some brain regions is out of balance, leading to a lack of coordination between thoughts, actions, and emotions. This study provides various intelligent Deep Learning (DL)-based methods for automated SZ diagnosis via EEG signals. The obtained results are compared with those of conventional intelligent methods. To implement the proposed methods, the dataset of the Institute of Psychiatry and Neurology in Warsaw, Poland, was used. First, EEG signals are divided into 25-second time frames and then normalized by z-score or L2 norm. In the classification step, two different approaches are considered for SZ diagnosis via EEG signals. The classification of EEG signals is first carried out by conventional machine learning methods, e.g., KNN, DT, SVM, Bayes, bagging, RF, and ET. Various proposed DL models, including LSTMs, 1D-CNNs, and 1D-CNN-LSTMs, are then used. In this step, the DL models were implemented and compared with different activation functions. Among the proposed DL models, the CNN-LSTM architecture had the best performance. In this architecture, the ReLU activation function and combined z-score and L2 normalization are used. The proposed CNN-LSTM model achieved an accuracy of 99.25%, better than the results of most former studies in this field. It is worth mentioning that all simulations were performed with k-fold cross-validation with k=5.
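As a hedged illustration of the 1D-CNN-LSTM idea with z-score normalization of 25-second EEG frames, the sketch below uses assumed channel counts, sampling rate, and layer sizes; it is not the authors' implementation.

    # Illustrative 1D-CNN-LSTM on z-scored 25-second EEG frames.
    import torch
    import torch.nn as nn

    fs, frame_sec, n_channels = 250, 25, 19            # assumed recording parameters
    x = torch.randn(4, n_channels, fs * frame_sec)     # 4 frames of raw EEG
    x = (x - x.mean(dim=-1, keepdim=True)) / x.std(dim=-1, keepdim=True)  # per-channel z-score

    class CnnLstm(nn.Module):
        def __init__(self, n_channels=19, n_classes=2):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel_size=7, stride=2), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
                nn.MaxPool1d(4))
            self.lstm = nn.LSTM(64, 64, batch_first=True)
            self.head = nn.Linear(64, n_classes)

        def forward(self, x):
            f = self.cnn(x)                            # (batch, 64, reduced time)
            h, _ = self.lstm(f.transpose(1, 2))        # LSTM over the reduced time axis
            return self.head(h[:, -1])                 # last hidden state -> class logits

    print(CnnLstm()(x).shape)                          # torch.Size([4, 2])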
