Mind-wandering (MW), usually defined as a lapse of attention, occurs between 20% and 40% of the time and has negative effects on our daily lives. Detecting when MW occurs can therefore help prevent its negative outcomes, such as losing track of a course during learning. In this work, we first collect a multi-modal Sustained Attention to Response Task (MM-SART) database for detecting MW. Data from eighty-two participants are collected in our experiments. For each participant, we record 32-channel electroencephalogram (EEG) signals, photoplethysmography (PPG) signals, galvanic skin response (GSR) signals, eye-tracker signals, and several questionnaires for detailed analyses. We then propose an effective MW detection system based on the collected EEG signals. To explore the non-linear characteristics of EEG signals, we utilize entropy-based features in the time, frequency, and wavelet domains. The experimental results show that we can reach a 0.712 AUC score by using the random forest (RF) classifier with leave-one-subject-out cross-validation. Moreover, to lower the overall computational complexity of the MW detection system, we apply channel selection and feature selection techniques. By using only the two most significant EEG channels, we can reduce the training time of the classifier by 44.16%. By performing correlation importance feature elimination (CIFE) on the feature set, we can further improve the AUC score to 0.725 while requiring only 14.6% of the selection time of the recursive feature elimination (RFE) method. With the proposed MW detection engine, this work can be applied to educational scenarios, especially in the current era of remote learning.
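A minimal sketch of the pipeline this abstract describes, assuming epoch-wise entropy features (sample entropy is used here as one representative entropy measure) fed to a random forest and scored with leave-one-subject-out AUC; variable names and parameter values are illustrative, not the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import roc_auc_score

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1-D signal (one simple entropy-based feature)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def count_matches(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        dists = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return np.sum(dists <= r) - len(templ)   # exclude self-matches
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def evaluate_loso(X_raw, y, groups):
    """X_raw: (n_trials, n_channels, n_samples) EEG epochs; y: MW labels;
    groups: subject IDs. Assumes each held-out subject has both classes."""
    feats = np.array([[sample_entropy(ch) for ch in trial] for trial in X_raw])
    scores = []
    for train, test in LeaveOneGroupOut().split(feats, y, groups):
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(feats[train], y[train])
        scores.append(roc_auc_score(y[test], clf.predict_proba(feats[test])[:, 1]))
    return np.mean(scores)
```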
Epilepsy is one of the most critical neurological disorders, and its early diagnosis helps clinicians provide accurate treatment for patients. Electroencephalogram (EEG) signals are widely used for epileptic seizure detection, as they provide specialists with substantial information about the functioning of the brain. In this paper, a novel diagnostic procedure using fuzzy theory and deep learning techniques is introduced. The proposed method is evaluated on the Bonn University dataset with six classification combinations and also on the Freiburg dataset. The tunable-Q wavelet transform (TQWT) is employed to decompose the EEG signals into different sub-bands. In the feature extraction step, 13 different fuzzy entropies are calculated from the TQWT sub-bands, and their computational complexities are reported to help researchers choose the best feature sets. Next, an autoencoder (AE) with six layers is employed for dimensionality reduction. Finally, the standard adaptive neuro-fuzzy inference system (ANFIS), as well as its variants with the grasshopper optimization algorithm (ANFIS-GOA), particle swarm optimization (ANFIS-PSO), and breeding swarm optimization (ANFIS-BS), are used for classification. With the proposed method, ANFIS-BS obtains an accuracy of 99.74% in binary classification and 99.46% in ternary classification on the Bonn dataset, and 99.28% on the Freiburg dataset, reaching state-of-the-art performance on both.
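A minimal sketch of one common fuzzy entropy variant (exponential membership function), as could be applied to each TQWT sub-band in the feature extraction step; the function name, embedding dimension, and tolerance are illustrative assumptions, not the paper's exact 13 entropy definitions.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """Fuzzy entropy of a 1-D signal using an exponential membership function."""
    x = np.asarray(x, dtype=float)
    r = r * x.std()
    def phi(mm):
        # zero-mean templates of length mm
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        templ = templ - templ.mean(axis=1, keepdims=True)
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        mu = np.exp(-(d ** n) / r)      # fuzzy membership degrees
        np.fill_diagonal(mu, 0.0)       # drop self-matches
        return mu.sum() / (len(templ) * (len(templ) - 1))
    return np.log(phi(m)) - np.log(phi(m + 1))

# e.g. features = [fuzzy_entropy(band) for band in tqwt_subbands]
```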
A brain-computer interface (BCI) is used not only to control external devices for healthy people but also to rehabilitate motor functions in motor-disabled patients. Decoding movement intention is one of the most significant aspects of performing arm-movement tasks using brain signals. Decoding movement execution (ME) from electroencephalogram (EEG) signals has shown high performance in previous works; however, intention decoding based on the movement imagination (MI) paradigm has so far failed to achieve sufficient accuracy. In this study, we focus on a robust MI decoding method with transfer learning across the ME and MI paradigms. We acquired EEG data related to arm reaching in 3D directions and propose a BCI transfer-learning method based on a Relation network (BTRN) architecture. Its decoding performance is the highest compared with conventional works. We confirm the potential of the BTRN architecture to contribute to continuous decoding of MI using ME datasets.
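A minimal sketch of a Relation-network-style pairing of an embedding module and a relation module, as one plausible way to compare ME (source) and MI (target) EEG trials; the layer sizes, kernel widths, and module names are illustrative assumptions rather than the BTRN architecture's actual design.

```python
import torch
import torch.nn as nn

class Embedding(nn.Module):
    """Maps an EEG trial (batch, channels, samples) to a feature vector."""
    def __init__(self, n_channels, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16), nn.Flatten(),
            nn.Linear(32 * 16, dim), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class RelationModule(nn.Module):
    """Scores how related a support (ME) embedding and a query (MI) embedding are."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid())   # relation score in [0, 1]
    def forward(self, support, query):
        return self.net(torch.cat([support, query], dim=-1))

# Training would push the relation score toward 1 for same-direction ME/MI
# pairs and toward 0 otherwise, so ME data can support MI decoding.
```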
In this work, we present a neuromorphic system that combines, for the first time, a neural recording headstage with a signal-to-spike conversion circuit and a multi-core spiking neural network (SNN) architecture on the same die for recording, processing, and detecting high-frequency oscillations (HFOs), which are biomarkers for the epileptogenic zone. The device was fabricated using a standard 0.18 $\mu$m CMOS technology node and has a total area of 99 mm$^{2}$. We demonstrate its application to HFO detection in iEEG recorded from 9 patients with temporal lobe epilepsy who subsequently underwent epilepsy surgery. The total average power consumption of the chip during the detection task was 614.3 $\mu$W. We show how the neuromorphic system can reliably detect HFOs: the system predicts postsurgical seizure outcome with state-of-the-art accuracy, specificity, and sensitivity (78%, 100%, and 33%, respectively). This is the first feasibility study towards identifying relevant features in intracranial human data in real time, on chip, using event-based processors and spiking neural networks. By providing neuromorphic intelligence to neural recording circuits, the proposed approach will pave the way for systems that can detect HFO areas directly in the operating room and improve the seizure outcome of epilepsy surgery.
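A minimal software sketch of delta modulation, a common signal-to-spike conversion scheme for feeding analog recordings into an SNN; the threshold value and function name are illustrative assumptions and do not reflect the chip's actual circuit parameters.

```python
import numpy as np

def delta_modulate(signal, threshold=1e-6):
    """Emit UP/DOWN spike sample indices whenever the signal moves by +/- threshold."""
    up, down = [], []
    ref = signal[0]                      # running reference level
    for i, s in enumerate(signal):
        while s - ref >= threshold:      # signal rose by at least one step
            up.append(i)
            ref += threshold
        while ref - s >= threshold:      # signal fell by at least one step
            down.append(i)
            ref -= threshold
    return np.array(up), np.array(down)
```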
We present a novel solution to the problem of localizing MEG and EEG brain signals. The solution is sequential and iterative, and is based on minimizing the least-squares (LS) criterion with the Alternating Projection (AP) algorithm, which is well known in the context of array signal processing. Unlike existing scanning solutions belonging to the beamformer and multiple-signal classification (MUSIC) families, the algorithm performs well at low signal-to-noise ratio (SNR) and can cope with closely spaced sources and any mixture of correlated sources. Results from simulated data and experimental MEG data from a real phantom demonstrate robust performance across an extended SNR range, the entire inter-source correlation range, and multiple sources, with consistently higher localization accuracy than popular scanning methods.
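A simplified numpy sketch of the sequential-then-alternating idea described here: sources are added one at a time and then refined one at a time, each grid scan maximizing a projected-data criterion with the other sources projected out. The variable names (leadfield, R) and the specific scanning value are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def projector(A):
    """Orthogonal projector onto the complement of the column space of A."""
    return np.eye(A.shape[0]) - A @ np.linalg.pinv(A)

def ap_value(a, R, P):
    """Scanning value for candidate topography a, data covariance R,
    and projector P removing the currently fixed sources."""
    b = P @ a
    return (b @ R @ b) / (b @ b)

def alternating_projection(leadfield, R, n_sources, n_iters=5):
    """leadfield: (n_sensors, n_grid) candidate topographies; R: (n_sensors, n_sensors)."""
    n_sensors, n_grid = leadfield.shape
    est = []
    for q in range(n_sources):                       # sequential initialization
        P = projector(leadfield[:, est]) if est else np.eye(n_sensors)
        vals = [ap_value(leadfield[:, g], R, P) for g in range(n_grid)]
        est.append(int(np.argmax(vals)))
    for _ in range(n_iters):                         # alternating refinement
        for q in range(n_sources):
            others = [est[k] for k in range(n_sources) if k != q]
            P = projector(leadfield[:, others])
            vals = [ap_value(leadfield[:, g], R, P) for g in range(n_grid)]
            est[q] = int(np.argmax(vals))
    return est                                       # grid indices of estimated sources
```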
The brain-computer interface (BCI) is a rapidly developing area of experimentation for motor activities and plays a vital role in decoding cognitive activity. Classifying cognitive motor-imagery activities from EEG signals is a critical task. We therefore propose an algorithm for classifying left/right-hand movements using a multi-layer perceptron neural network. Hand-crafted statistical time-domain features and power spectral density frequency-domain features were extracted, yielding a combined accuracy of 96.02%. The results were compared with a deep learning framework. In addition to accuracy, precision, F1-score, and recall were considered as performance metrics. Unwanted signal components contaminate the EEG recordings and degrade the performance of the algorithm; therefore, artifacts were removed using independent component analysis, which boosted performance, followed by the selection of feature vectors that provided acceptable accuracy. The same method was applied to all nine subjects, giving an average intra-subject accuracy of 94.72%. The results show that the proposed approach can classify upper-limb movements accurately.
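A brief sketch of the kind of pipeline this abstract describes: hand-crafted time-domain statistics and Welch power-spectral-density features per channel feeding a multi-layer perceptron classifier. The specific statistics, frequency band, and layer sizes are illustrative assumptions, not the paper's reported settings.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import skew, kurtosis
from sklearn.neural_network import MLPClassifier

def trial_features(trial, fs=250):
    """trial: (n_channels, n_samples) EEG epoch -> 1-D feature vector."""
    feats = []
    for ch in trial:
        # time-domain statistics
        feats += [ch.mean(), ch.var(), skew(ch), kurtosis(ch)]
        # mean power spectral density in the 8-30 Hz (mu/beta) band
        f, pxx = welch(ch, fs=fs, nperseg=min(256, len(ch)))
        feats.append(pxx[(f >= 8) & (f <= 30)].mean())
    return np.array(feats)

# X: list of ICA-cleaned epochs, y: left/right labels
# clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
# clf.fit(np.array([trial_features(t) for t in X]), y)
```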