In this work we demonstrate the use of neural networks for the rapid extraction of signal parameters from discretely sampled signals. In particular, we use dense autoencoder networks to extract the parameters of interest from exponentially decaying signals and decaying oscillations. By using a three-stage training method and a careful choice of the neural-network size, we are able to retrieve the relevant signal parameters directly from the latent space of the autoencoder network at significantly improved rates compared to traditional algorithmic signal-analysis approaches. We show that the achievable precision and accuracy of this analysis method are similar to those of conventional algorithm-based signal-analysis methods, by demonstrating that the extracted signal parameters approach their fundamental parameter-estimation limit as given by the Cramér-Rao bound. Furthermore, we demonstrate that autoencoder networks are able to perform signal analysis, and, hence, parameter extraction, at rates of 75 kHz, orders of magnitude faster than conventional techniques of similar precision. Finally, we explore the limitations of our approach, demonstrating that analysis rates of $>$200 kHz are feasible with further optimization of the transfer rate between the data-acquisition and data-analysis systems.
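As a hedged illustration of the idea, the sketch below (in PyTorch, with illustrative layer sizes; not the authors' implementation) shows a dense autoencoder whose low-dimensional latent space is pushed toward the physical signal parameters by combining a reconstruction loss with a supervised latent-space loss, loosely mimicking a staged training strategy.

```python
# Hypothetical sketch (not the authors' code): a dense autoencoder whose
# low-dimensional latent space is nudged toward the physical decay parameters.
import torch
import torch.nn as nn

class DenseAutoencoder(nn.Module):
    def __init__(self, n_samples=256, n_latent=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_samples, 64), nn.ReLU(),
            nn.Linear(64, n_latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.ReLU(),
            nn.Linear(64, n_samples),
        )

    def forward(self, x):
        z = self.encoder(x)          # latent code ~ (amplitude, decay rate)
        return self.decoder(z), z

# One training step combining reconstruction loss with a supervised latent loss.
model = DenseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 256)             # batch of sampled decay traces (toy data)
params = torch.randn(32, 2)          # known (amplitude, decay) labels for training
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x) + nn.functional.mse_loss(z, params)
loss.backward(); opt.step()
```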
Physiological signals, such as the electrocardiogram and the phonocardiogram, are very often corrupted by noise sources. Usually, artificial-intelligence algorithms analyze the signal regardless of its quality. Physicians, on the other hand, use a completely orthogonal strategy: they do not assess the entire recording, but instead search for a segment where the fundamental and abnormal waves are easily detected, and only then attempt a prognosis. Inspired by this fact, we propose a new algorithm that automatically selects an optimal segment for a post-processing stage, according to criteria defined by the user. In the process, a neural network is used to compute the output state probability distribution for each sample. Using these quantities, a graph is designed in which state-transition constraints are physically imposed, and a set of constraints is used to retrieve the subset of the recording that maximizes the likelihood function proposed by the user. The developed framework is tested and validated in two applications. In both cases, the system performance is boosted significantly; e.g., in heart sound segmentation, sensitivity increases by 2.4% compared to the standard approaches in the literature.
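A minimal sketch of the segment-selection idea, under the simplifying assumption that the network's per-sample state posteriors are summarized as log-likelihoods and that the optimal segment is a fixed-length window maximizing their sum (the paper's graph-based formulation with transition constraints is richer than this):

```python
# Illustrative sketch (assumed, not the paper's implementation): pick the
# fixed-length window whose summed log-probability under the network's
# per-sample state posteriors is maximal.
import numpy as np

def best_segment(log_probs, win_len):
    """log_probs: (T,) per-sample log-likelihood of the most probable state."""
    csum = np.concatenate(([0.0], np.cumsum(log_probs)))
    window_scores = csum[win_len:] - csum[:-win_len]   # sum over each window
    start = int(np.argmax(window_scores))
    return start, start + win_len

T = 4000
log_probs = np.log(np.clip(np.random.rand(T), 1e-6, 1.0))  # stand-in for NN output
print(best_segment(log_probs, win_len=800))
```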
Cardiovascular diseases are among the most severe causes of mortality, taking a heavy toll of lives annually throughout the world. Continuous monitoring of blood pressure seems to be the most viable option, but this demands an invasive process, bringing about several layers of complexity. This motivates us to develop a method to predict the continuous arterial blood pressure (ABP) waveform through a non-invasive approach using photoplethysmogram (PPG) signals. In addition, we explore the advantage of deep learning, which frees us from relying only on ideally shaped PPG signals by making handcrafted feature computation irrelevant, a shortcoming of the existing approaches. Thus, we present PPG2ABP, a deep-learning-based method that predicts the continuous ABP waveform from the input PPG signal with a mean absolute error of 4.604 mmHg, preserving the shape, magnitude, and phase in unison. The more astounding success of PPG2ABP, however, is that the values of DBP, MAP, and SBP computed from the predicted ABP waveform outperform the existing works under several metrics, even though PPG2ABP is not explicitly trained to do so.
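As a hedged post-processing sketch (not necessarily the paper's exact procedure), SBP, DBP, and MAP can be read off a predicted ABP window as its maximum, minimum, and mean, respectively:

```python
# Hedged sketch: deriving SBP, DBP and MAP from a predicted ABP window
# via its extrema and time average (one plausible post-processing step).
import numpy as np

def bp_indices(abp):
    """abp: predicted arterial blood pressure waveform in mmHg."""
    sbp = float(np.max(abp))        # systolic peak
    dbp = float(np.min(abp))        # diastolic trough
    map_ = float(np.mean(abp))      # mean arterial pressure over the window
    return sbp, dbp, map_

t = np.arange(0, 8, 1 / 125)                                 # 8 s at 125 Hz (toy)
abp = 90 + 30 * np.maximum(np.sin(2 * np.pi * 1.2 * t), 0)   # toy ABP-like trace
print(bp_indices(abp))
```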
We present different computational approaches for the rapid extraction of the signal parameters of discretely sampled damped sinusoidal signals. We compare time- and frequency-domain computational approaches in terms of their accuracy, precision, and the computational time required to estimate the frequencies of such signals, and observe a general trade-off between precision and speed. Our motivation is the precise and rapid analysis of damped sinusoidal signals, as these become relevant in view of recent experimental developments in cavity-enhanced polarimetry and ellipsometry, where the relevant time scales and frequencies are typically within the $\sim 1$-$10\,\mu$s and $\sim 1$-$100\,$MHz ranges, respectively. In such experimental efforts, single-shot analysis with high accuracy and precision becomes important when developing experiments that study dynamical effects and/or when developing portable instrumentation. Our results suggest that online, running-fashion, microsecond-resolved analysis of polarimetric/ellipsometric measurements with fractional uncertainties at the $10^{-6}$ level is possible, and using a proof-of-principle experimental demonstration we show that a frequency-based analysis approach can monitor and analyze signals at kHz rates and accurately detect signal changes at microsecond time scales.
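A minimal sketch of one such frequency-domain approach, on a toy damped sinusoid with assumed sampling parameters: locate the FFT magnitude peak and refine the estimate with a three-point parabolic interpolation on the log-magnitude.

```python
# Minimal sketch of a frequency-domain estimate for a damped sinusoid:
# locate the FFT peak and refine it with parabolic interpolation.
import numpy as np

def estimate_frequency(signal, fs):
    spec = np.abs(np.fft.rfft(signal))
    k = int(np.argmax(spec[1:])) + 1                  # skip DC bin
    # three-point parabolic interpolation around the peak bin
    a, b, c = np.log(spec[k - 1]), np.log(spec[k]), np.log(spec[k + 1])
    delta = 0.5 * (a - c) / (a - 2 * b + c)
    return (k + delta) * fs / len(signal)

fs = 500e6                                            # 500 MS/s sampling (assumed)
t = np.arange(0, 5e-6, 1 / fs)                        # ~5 us record
sig = np.exp(-t / 2e-6) * np.sin(2 * np.pi * 20e6 * t)
print(estimate_frequency(sig, fs))                    # ~20 MHz
```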
Schizophrenia (SZ) is a mental disorder in which, owing to the secretion of specific chemicals in the brain, the function of some brain regions is out of balance, leading to a lack of coordination between thoughts, actions, and emotions. This study provides various intelligent deep learning (DL)-based methods for automated SZ diagnosis via EEG signals, and the obtained results are compared with those of conventional intelligent methods. To implement the proposed methods, the dataset of the Institute of Psychiatry and Neurology in Warsaw, Poland, has been used. First, EEG signals are divided into 25-second time frames and then normalized by z-score or L2 norm. In the classification step, two different approaches are considered for SZ diagnosis via EEG signals. First, the EEG signals are classified by conventional machine learning methods, e.g., KNN, DT, SVM, Bayes, bagging, RF, and ET. Then, various proposed DL models, including LSTMs, 1D-CNNs, and 1D-CNN-LSTMs, are used; the DL models were implemented and compared with different activation functions. Among the proposed DL models, the CNN-LSTM architecture achieved the best performance. In this architecture, the ReLU activation function and the combined z-score and L2 normalization are used. The proposed CNN-LSTM model achieved an accuracy of 99.25%, better than the results of most former studies in this field. It is worth mentioning that all simulations were performed using k-fold cross-validation with k=5.
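For illustration only, the following sketch shows a 1D-CNN-LSTM classifier of the general kind described; the layer sizes, channel count, and sampling rate are assumptions, not the study's exact architecture.

```python
# Hedged sketch of a 1D-CNN-LSTM EEG classifier (illustrative sizes only).
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_channels=19, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.lstm = nn.LSTM(64, 64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, channels, time)
        h = self.conv(x).transpose(1, 2)  # -> (batch, time, features)
        _, (hn, _) = self.lstm(h)
        return self.fc(hn[-1])

x = torch.randn(8, 19, 6250)              # 25-s frames at an assumed 250 Hz
print(CNNLSTM()(x).shape)                  # torch.Size([8, 2])
```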
This paper presents a novel compressed sensing (CS) approach to high-dimensional wireless channel estimation by optimizing the input to a deep generative network. Channel estimation using generative networks relies on the assumption that the reconstructed channel lies in the range of a generative model. Channel reconstruction using generative priors outperforms conventional CS techniques and requires fewer pilots. It also eliminates the need for a priori knowledge of the sparsifying basis, instead using the structure captured by the deep generative model as a prior. Using this prior, we also perform channel estimation from one-bit quantized pilot measurements, and propose a novel optimization objective function that attempts to maximize the correlation between the received signal and the generator's channel estimate while minimizing the rank of the channel estimate. Our approach significantly outperforms sparse signal recovery methods such as Orthogonal Matching Pursuit (OMP) and Approximate Message Passing (AMP) algorithms such as EM-GM-AMP for narrowband mmWave channel reconstruction, and its execution time is not noticeably affected by an increase in the number of received pilot symbols.
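A hedged sketch of the underlying generative-prior recovery step, with a stand-in generator and a simple least-squares data fit rather than the paper's correlation/rank objective: the latent input z is optimized so that the projected generator output matches the received pilots.

```python
# Illustrative sketch (assumed setup): recover a channel by optimizing the
# latent input z of a trained generator G so that A @ G(z) matches the pilots.
import torch

torch.manual_seed(0)
n, m, d = 64, 16, 10                         # channel dim, pilots, latent dim
G = torch.nn.Sequential(                     # stand-in for a trained generator
    torch.nn.Linear(d, 128), torch.nn.ReLU(), torch.nn.Linear(128, n))
A = torch.randn(m, n) / m**0.5               # pilot measurement matrix
y = A @ G(torch.randn(d)).detach()           # synthetic received pilots

z = torch.zeros(d, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = torch.sum((A @ G(z) - y) ** 2)    # least-squares data fit
    loss.backward()
    opt.step()
h_hat = G(z).detach()                        # estimate in the generator's range
```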