This paper presents a new solution for reconstructing missing data in power system measurements. An Enhanced Denoising Autoencoder (EDAE) is proposed that reconstructs missing data through input-vector-space reconstruction based on the correlation of neighboring values, combined with Long Short-Term Memory (LSTM) networks. The proposed LSTM-EDAE is able to remove noise, extract the principal features of the dataset, and reconstruct the missing information for new inputs. The paper shows that exploiting neighbor correlation improves missing-data reconstruction. Trained with LSTM networks, the EDAE copes more effectively with big data in power systems and outperforms the neural network used in the conventional Denoising Autoencoder. A random data sequence and simulated Phasor Measurement Unit (PMU) data of a power system are used to verify the effectiveness of the proposed LSTM-EDAE.
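As a minimal illustration of the neighbor-correlation idea (not the paper's LSTM-EDAE itself), the sketch below repairs missing samples with the mean of their nearest available neighbors; the toy signal, mask, and function name are all hypothetical.

```python
import numpy as np

def fill_with_neighbors(x, missing_mask):
    """Replace each missing sample with the mean of its nearest
    available neighbors (a simplified neighbor-correlation repair)."""
    x = x.copy()
    known = np.where(~missing_mask)[0]          # indices of intact samples
    for i in np.where(missing_mask)[0]:
        left = known[known < i]
        right = known[known > i]
        vals = []
        if left.size:
            vals.append(x[left[-1]])
        if right.size:
            vals.append(x[right[0]])
        x[i] = np.mean(vals)
    return x

signal = np.sin(np.linspace(0, 2 * np.pi, 20))
mask = np.zeros(20, dtype=bool)
mask[[5, 6, 13]] = True                         # pretend these were lost
corrupted = signal.copy()
corrupted[mask] = 0.0
restored = fill_with_neighbors(corrupted, mask)
```

In the paper, this kind of neighbor-based input reconstruction feeds an LSTM-based autoencoder; here the repair alone already keeps the error bounded by the local signal variation.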
This letter introduces a new denoiser that modifies the structure of the denoising autoencoder (DAE), termed noise learning based DAE (nlDAE). The proposed nlDAE learns the noise of the input data; denoising is then performed by subtracting the regenerated noise from the noisy input. Hence, nlDAE is more effective than DAE when the noise is simpler to regenerate than the original data. To validate the performance of nlDAE, we provide three case studies: signal restoration, symbol demodulation, and precise localization. Numerical results suggest that nlDAE requires a smaller latent space dimension and a smaller training dataset than DAE.
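The core nlDAE operation, denoising by subtracting regenerated noise, can be sketched with a linear least-squares "noise regenerator" standing in for the autoencoder. The dimensions, scales, and low-rank noise model below are assumptions chosen to mimic the regime where noise is simpler than data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regime where the noise is simpler (low-rank) than the data,
# which is when nlDAE is claimed to beat a plain DAE.
n, d = 500, 8
clean = rng.standard_normal((n, d))
noise_basis = rng.standard_normal((2, d))       # noise spans only 2 dims
noise = rng.standard_normal((n, 2)) @ noise_basis
noisy = clean + noise

# Fit a linear map from the noisy input to the noise itself
# (a stand-in for training the autoencoder on noise targets).
W, *_ = np.linalg.lstsq(noisy, noise, rcond=None)

# nlDAE step: subtract the regenerated noise from the noisy input.
denoised = noisy - noisy @ W
err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((denoised - clean) ** 2)
```

Because the noise occupies a low-dimensional subspace, even this crude regenerator removes most of it, while regenerating the full-rank clean signal would be much harder.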
In modern power systems, the Rate-of-Change-of-Frequency (ROCOF) may be widely employed in Wide Area Monitoring, Protection and Control (WAMPAC) applications. However, a standard approach to ROCOF measurement is still missing. In this paper, we investigate the feasibility of deploying Phasor Measurement Units (PMUs) in ROCOF-based applications, with a specific focus on Under-Frequency Load-Shedding (UFLS). For this analysis, we select three state-of-the-art window-based synchrophasor estimation algorithms and compare different signal models, ROCOF estimation techniques, and window lengths on datasets inspired by real-world acquisitions. In this way, we carry out a sensitivity analysis of the behavior of a PMU-based UFLS control scheme. Based on the results, PMUs prove to be accurate ROCOF meters as long as the harmonic and inter-harmonic distortion within the measurement pass-bandwidth is limited. In the presence of transient events, the synchrophasor model loses its appropriateness, as the signal energy spreads over the entire spectrum and cannot be approximated as a sequence of narrow-band components. Finally, we validate the feasibility of PMU-based UFLS in a real-time simulated scenario in which we compare two different ROCOF estimation techniques against a frequency-based control scheme and show their impact on successful grid restoration.
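For intuition, ROCOF is the time derivative of frequency, which is itself the derivative of the phasor phase angle. A finite-difference sketch on a synthetic frequency ramp (this is not one of the window-based estimators compared in the paper; the sample rate and ramp are assumed values) looks like:

```python
import numpy as np

fs = 1000.0                                  # sample rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)
f0, rocof_true = 50.0, 0.5                   # 50 Hz with a 0.5 Hz/s ramp
phase = 2 * np.pi * (f0 * t + 0.5 * rocof_true * t ** 2)

# Frequency is the phase derivative over 2*pi; ROCOF is its derivative.
freq = np.diff(phase) * fs / (2 * np.pi)     # instantaneous frequency, Hz
rocof = np.diff(freq) * fs                   # Hz per second
```

On a clean quadratic phase ramp this two-step difference recovers the ramp essentially exactly; the paper's point is precisely that harmonics, inter-harmonics, and transients make real ROCOF estimation far harder than this.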
Power system state estimation is heavily affected by measurement error, which arises from measuring-instrument noise, communication noise, and other unexplained randomness. Traditional weighted least squares (WLS), the most widely used state estimation method, attempts to minimize the residual between measurements and the estimates of the measured variables, but it is unable to handle the measurement error itself. To solve this problem, this paper proposes a data-driven approach, based on random matrix theory, to clean measurement error at the matrix level. Our method significantly reduces the negative effect of measurement error and forms a two-stage state estimation scheme in combination with WLS. In this method, a Hermitian matrix is constructed to establish an invertible relationship between the eigenvalues of the measurements and their covariance matrix. Random matrix tools, combined with an optimization scheme, are used to clean measurement error by shrinking the eigenvalues of the covariance matrix. With strong robustness and generality, our approach is particularly suitable for large interconnected power grids. The method has been numerically evaluated on different test systems, with multiple measurement-noise models and matrix size ratios.
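The eigenvalue-shrinkage idea can be illustrated with simple linear shrinkage toward the mean eigenvalue. The paper derives its shrinkage from random matrix theory; the intensity `alpha` and the toy covariance below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

p, n = 20, 60                                 # dimension vs. sample count
true_cov = 2.0 * np.eye(p)                    # toy truth: equal eigenvalues
samples = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
sample_cov = np.cov(samples, rowvar=False)

# Shrink the eigenvalues of the sample covariance toward their mean,
# keeping the eigenvectors fixed.
eigval, eigvec = np.linalg.eigh(sample_cov)
alpha = 0.5                                   # shrinkage intensity (assumed)
shrunk = alpha * eigval + (1 - alpha) * eigval.mean()
cleaned_cov = eigvec @ np.diag(shrunk) @ eigvec.T

err_raw = np.linalg.norm(sample_cov - true_cov)
err_clean = np.linalg.norm(cleaned_cov - true_cov)
```

With few samples per dimension, the sample eigenvalues spread well beyond the true ones (the Marchenko-Pastur broadening), so pulling them back toward the bulk reduces the estimation error.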
Deep learning-based models have greatly advanced the performance of speech enhancement (SE) systems. However, two problems remain unsolved, both closely related to model generalizability across noisy conditions: (1) mismatched noisy conditions during testing, i.e., performance is generally sub-optimal when models are tested with unseen noise types that are not included in the training data; (2) local focus on specific noisy conditions, i.e., models trained with multiple noise types cannot optimally remove a specific noise type even when that type is included in the training data. These problems are common in real applications. In this paper, we propose a novel denoising autoencoder with a multi-branched encoder (termed DAEME) model to deal with these two problems. The DAEME model involves two stages: training and testing. In the training stage, we build multiple component models to form a multi-branched encoder based on a decision tree (DSDT). The DSDT is constructed from prior knowledge of speech and noisy conditions (the speaker, environment, and signal factors are considered in this paper), and each component of the multi-branched encoder performs a particular mapping from noisy to clean speech along its branch of the DSDT. Finally, a decoder is trained on top of the multi-branched encoder. In the testing stage, noisy speech is first processed by each component model, and the multiple outputs from these models are then integrated by the decoder to determine the final enhanced speech. Experimental results show that DAEME is superior to several baseline models in terms of objective evaluation metrics, automatic speech recognition results, and quality in subjective human listening tests.
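Structurally, DAEME is "several specialized encoders, one learned fusion decoder." A dependency-free caricature of that structure (the branches here are fixed linear/nonlinear maps rather than trained neural encoders, the decoder is a least-squares fit, and all names and sizes are hypothetical) is:

```python
import numpy as np

rng = np.random.default_rng(2)

n, d = 400, 16
clean = rng.standard_normal((n, d))
noisy = clean + 0.5 * rng.standard_normal((n, d))

def branch_a(x):
    return 0.8 * x          # stand-in for an encoder tuned to one condition

def branch_b(x):
    return np.tanh(x)       # stand-in for an encoder tuned to another

# Multi-branched encoder: every branch processes the same noisy input.
feats = np.hstack([branch_a(noisy), branch_b(noisy)])

# Decoder trained on top of the branch outputs (least squares here).
W, *_ = np.linalg.lstsq(feats, clean, rcond=None)
enhanced = feats @ W

mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((enhanced - clean) ** 2)
```

The point of the fusion step is that the decoder can weight whichever branch best matches the current condition, rather than committing to a single model for all noise types.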
For speech-related applications in IoT environments, identifying effective methods to handle interference noise and to compress the transmitted data is essential for achieving high-quality services. In this study, we propose a novel multi-input multi-output speech compression and enhancement (MIMO-SCE) system based on a convolutional denoising autoencoder (CDAE) model to simultaneously improve speech quality and reduce the dimensionality of the transmitted data. Compared with conventional single-channel and multi-input single-output systems, MIMO systems can be employed in applications where multiple acoustic signals need to be handled. We investigated two CDAE models, a fully convolutional network (FCN) and a Sinc FCN, as the core models in MIMO systems. The experimental results confirm that the proposed MIMO-SCE framework effectively improves speech quality and intelligibility while reducing the amount of recorded data by a factor of 7 for transmission.
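The factor-of-7 data reduction can be pictured with naive frame pooling in place of the learned FCN encoder/decoder (everything below is a toy stand-in, not the paper's model; the 5 Hz test tone and frame size are assumptions):

```python
import numpy as np

signal = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 700))

# "Encoder": collapse every 7 samples into 1, a 7x transmission saving.
compressed = signal.reshape(-1, 7).mean(axis=1)

# "Decoder": crude reconstruction by sample-and-hold upsampling.
restored = np.repeat(compressed, 7)
```

A learned CDAE replaces both the pooling and the upsampling with convolutional layers, so the reconstruction also denoises rather than merely interpolating.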