Spike sorting plays an irreplaceable role in understanding neural codes. Traditional spike sorting pipelines perform feature extraction and clustering separately after spikes have been detected. However, this separation often introduces additional processing steps and can lead to inaccurate and/or unstable results, especially when the data contain noise and/or overlapping spikes. To address these issues, in this paper we propose a unified optimisation model integrating feature extraction and clustering for spike sorting. Interestingly, instead of the widely used combination strategy, i.e., performing principal component analysis (PCA) for spike feature extraction and K-means (KM) for clustering in sequence, we unify PCA and KM into one optimisation model, which removes intermediate processing steps and requires fewer iterations. Subsequently, by embedding the K-means++ strategy for initialisation and a comparison-based updating rule into the solving process, the proposed model handles noise and/or overlapping-spike interference well. Finally, by incorporating the best-performing clustering validity index into the proposed model, we derive an automatic spike sorting method. Extensive experimental results on both synthetic and real-world datasets confirm that our proposed method outperforms related state-of-the-art approaches.
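As a rough illustration of the conventional sequential pipeline this abstract argues against (not the paper's unified model), the following is a minimal pure-NumPy sketch of PCA for feature extraction followed by K-means with K-means++ seeding; all function names here are illustrative:

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def kmeans_pp_init(X, k, rng):
    """K-means++ seeding: sample centroids with probability proportional
    to squared distance from the nearest already-chosen centroid."""
    centroids = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centroids], axis=0)
        centroids.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centroids)

def kmeans(X, k, iters=50, seed=0):
    """Lloyd iterations; assumes no cluster becomes empty (fine for
    well-separated data, not production-hardened)."""
    rng = np.random.default_rng(seed)
    C = kmeans_pp_init(X, k, rng)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels
```

In the sequential pipeline, `kmeans(pca(waveforms, 2), n_units)` would assign each detected spike waveform to a putative unit; the paper's contribution is to optimise both stages jointly rather than in this sequence.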
Epilepsy is a neurological disorder regarded as the second most serious neurological disease known to humanity, after stroke. Localization of the epileptogenic zone is an important step in the treatment of epileptic patients, and it starts with epileptic spike detection. The common practice for spike detection in brain signals is visual scanning of the recordings, which is a subjective and very time-consuming task. Motivated by this, this paper focuses on using machine learning for automatic detection of epileptic spikes in magnetoencephalography (MEG) signals. First, we use the Position Weight Matrix (PWM) method combined with a uniform quantizer to generate useful features. Second, the extracted features are classified using a Support Vector Machine (SVM) to detect epileptic spikes. The proposed technique shows great potential for improving spike detection accuracy while reducing the feature vector size. Specifically, it achieved an average accuracy of up to 98% under 5-fold cross-validation on a balanced dataset of 3104 samples. These samples were extracted from 16 subjects, eight healthy and eight epileptic, using a sliding window of 100 sample points with a step size of 2 sample points.
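A minimal sketch of the quantizer-plus-PWM feature idea, under the assumption that the PWM records per-position level frequencies across quantized windows (the exact construction, smoothing, and scoring in the paper may differ; `uniform_quantize`, `build_pwm`, and `pwm_score` are hypothetical names):

```python
import numpy as np

def uniform_quantize(x, levels=8):
    """Map real-valued samples into integer levels 0..levels-1
    spread uniformly over the signal's amplitude range."""
    lo, hi = x.min(), x.max()
    q = np.floor((x - lo) / (hi - lo + 1e-12) * levels).astype(int)
    return np.clip(q, 0, levels - 1)

def build_pwm(windows, levels=8):
    """PWM[l, t] = smoothed relative frequency of quantization level l
    at position t across the training windows (n_windows x window_len)."""
    n, T = windows.shape
    pwm = np.zeros((levels, T))
    for w in windows:
        pwm[w, np.arange(T)] += 1
    return (pwm + 1) / (n + levels)   # Laplace smoothing

def pwm_score(window, pwm):
    """Log-likelihood of a quantized window under the PWM;
    such scores could serve as compact features for a downstream SVM."""
    return np.log(pwm[window, np.arange(len(window))]).sum()
```

Windows scoring high under a PWM built from spike segments (and low under a background PWM) would then be flagged, with the SVM making the final spike/non-spike decision.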
Feature extraction is an efficient approach for alleviating the curse of dimensionality in high-dimensional data. As a popular self-supervised learning method, contrastive learning (CL) has recently garnered considerable attention. In this study, we propose a unified framework, based on a new perspective of contrastive learning, that is suitable for both unsupervised and supervised feature extraction. The proposed framework first constructs two CL graphs to uniquely define the positive and negative pairs. Subsequently, the projection matrix is determined by minimizing the contrastive loss function. In addition, the proposed framework considers both similar and dissimilar samples, thereby unifying unsupervised and supervised feature extraction. Moreover, we propose three specific methods: an unsupervised contrastive learning method, supervised contrastive learning method 1, and supervised contrastive learning method 2. Finally, numerical experiments on five real datasets demonstrate the superior performance of the proposed framework in comparison with existing methods.
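One plausible linear instantiation of this idea, for intuition only: build a positive-pair scatter matrix from same-class pairs and a negative-pair scatter matrix from different-class pairs, then take the projection directions that spread negative pairs while compressing positive pairs (top eigenvectors of their difference). This is an assumed simplification, not the paper's exact graphs or loss:

```python
import numpy as np

def pair_scatter(X, pairs):
    """Sum of outer products of difference vectors over the given index pairs."""
    S = np.zeros((X.shape[1], X.shape[1]))
    for i, j in pairs:
        d = X[i] - X[j]
        S += np.outer(d, d)
    return S

def contrastive_projection(X, y, k):
    """Top-k eigenvectors of (negative-pair scatter - positive-pair scatter):
    directions along which dissimilar pairs are far and similar pairs are close."""
    n = len(X)
    pos = [(i, j) for i in range(n) for j in range(n) if i < j and y[i] == y[j]]
    neg = [(i, j) for i in range(n) for j in range(n) if i < j and y[i] != y[j]]
    S = pair_scatter(X, neg) - pair_scatter(X, pos)
    vals, vecs = np.linalg.eigh(S)
    return vecs[:, np.argsort(vals)[::-1][:k]]   # columns = projection directions
```

In the unsupervised setting, the positive and negative pairs would instead come from a neighborhood graph rather than labels.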
$\textbf{Objective}$: To develop a multi-channel device event segmentation and feature extraction algorithm that is robust to changes in data distribution. $\textbf{Methods}$: We introduce an adaptive transfer learning algorithm to classify and segment events from non-stationary multi-channel temporal data. Using a multivariate hidden Markov model (HMM) and Fisher's linear discriminant analysis (FLDA), the algorithm adaptively adjusts to shifts in distribution over time. The proposed algorithm is unsupervised and learns to label events without requiring $\textit{a priori}$ information about true event states. The procedure is illustrated on experimental data collected from a cohort in a human viral challenge (HVC) study, in which certain subjects have disrupted wake and sleep patterns after exposure to an H1N1 influenza pathogen. $\textbf{Results}$: Simulations establish that the proposed adaptive algorithm significantly outperforms other event classification methods. When applied to early time points in the HVC data, the algorithm extracts sleep/wake features that are predictive of both infection and infection onset time. $\textbf{Conclusion}$: The proposed transfer learning event segmentation method is robust to temporal shifts in data distribution and can be used to produce highly discriminative event-labeled features for health monitoring. $\textbf{Significance}$: Our integrated multisensor signal processing and transfer learning method is applicable to many ambulatory monitoring applications.
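To make the HMM-based segmentation step concrete, the following is a minimal Viterbi decoder for a sticky Gaussian-emission HMM on 1-D data (the paper's model is multivariate and adaptive; this fixed-parameter, two-state sketch only illustrates how an HMM turns a time series into event labels such as sleep/wake):

```python
import numpy as np

def viterbi_gaussian(x, means, var=1.0, stay=0.9):
    """Most likely state path for 1-D data with Gaussian emissions.

    means: per-state emission means (>= 2 states);
    stay:  self-transition probability (sticky dynamics discourage
           rapid switching between event states).
    """
    K, T = len(means), len(x)
    logA = np.log(np.where(np.eye(K, dtype=bool), stay, (1 - stay) / (K - 1)))
    logB = -0.5 * (x[None, :] - np.asarray(means)[:, None]) ** 2 / var
    delta = np.full((K, T), -np.inf)
    psi = np.zeros((K, T), dtype=int)
    delta[:, 0] = np.log(1.0 / K) + logB[:, 0]
    for t in range(1, T):
        scores = delta[:, t - 1, None] + logA      # scores[i, j]: state i -> j
        psi[:, t] = np.argmax(scores, axis=0)
        delta[:, t] = scores[psi[:, t], np.arange(K)] + logB[:, t]
    path = np.zeros(T, dtype=int)                  # backtrack the best path
    path[-1] = np.argmax(delta[:, -1])
    for t in range(T - 2, -1, -1):
        path[t] = psi[path[t + 1], t + 1]
    return path
```

Durations of the decoded state runs are the kind of event features (e.g., sleep-bout lengths) that could then feed a downstream classifier.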
The advent of large-scale and high-density extracellular recording devices allows simultaneous recording from thousands of neurons. However, the complexity and size of the data make it mandatory to develop robust algorithms for fully automated spike sorting. Here it is shown that limitations imposed by biological constraints, such as changes in spike waveforms induced under different drug regimes, should be carefully taken into consideration in future developments.
Respiratory ailments afflict a wide range of people and manifest themselves through conditions such as asthma and sleep apnea. Continuous monitoring of chronic respiratory ailments is seldom performed outside the intensive care ward because of the large size and cost of the monitoring system. While Electrocardiogram (ECG)-based respiration extraction is a validated approach, its adoption is limited by access to a suitable continuous ECG monitor. Recently, owing to the widespread adoption of wearable smartwatches with an in-built Photoplethysmogram (PPG) sensor, PPG is being considered a viable candidate for continuous and unobtrusive respiration monitoring. Research in this domain, however, has predominantly focused on estimating respiration rate from PPG. In this work, a novel end-to-end deep learning network called RespNet is proposed to extract the respiration signal from a given input PPG, as opposed to extracting only the respiration rate. The proposed network was trained and tested on two different datasets utilizing different modalities of reference respiration signal recordings. The similarity and performance of the proposed network were also compared against two conventional signal processing approaches for extracting the respiration signal. The proposed method was tested on two independent datasets, yielding Mean Squared Errors of 0.262 and 0.145. The Cross-Correlation coefficients for the respective datasets were found to be 0.933 and 0.931. The reported errors and similarity were found to be better than those of the conventional approaches. The proposed approach would aid clinicians in comprehensively evaluating sleep-related respiratory conditions and chronic respiratory ailments while remaining comfortable and inexpensive for the patient.
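The two figures of merit reported above are standard and easy to reproduce; a minimal sketch of how an extracted respiration signal would be scored against a reference (assuming the cross-correlation coefficient is the zero-lag normalized correlation, i.e. Pearson's r):

```python
import numpy as np

def mse(est, ref):
    """Mean squared error between estimated and reference respiration signals."""
    est, ref = np.asarray(est, float), np.asarray(ref, float)
    return float(np.mean((est - ref) ** 2))

def xcorr_coeff(est, ref):
    """Zero-lag normalized cross-correlation (Pearson's r) of the two signals:
    1.0 for identical shape, -1.0 for inverted, near 0 for unrelated."""
    est, ref = np.asarray(est, float), np.asarray(ref, float)
    e, r = est - est.mean(), ref - ref.mean()
    return float(e @ r / (np.linalg.norm(e) * np.linalg.norm(r)))
```

Note that MSE is amplitude-sensitive while the correlation coefficient is scale-invariant, which is why both are typically reported together.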