This technical report describes our system submitted to the Deep Noise Suppression Challenge and presents results for the non-real-time track. To refine the estimates stage by stage, we utilize recursive learning, a training protocol that aggregates information across multiple stages with a memory mechanism. An attention generator network is designed to dynamically control the feature distribution of the noise reduction network. To improve phase recovery accuracy, we adopt a complex spectral mapping procedure that decodes both the real and imaginary spectra. On the final blind test set, the average MOS improvements of the submitted system in the noreverb, reverb, and realrec categories are 0.49, 0.24, and 0.36, respectively.
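The two ideas above, stage-wise recursive refinement and complex spectral decoding, can be illustrated with a minimal numpy sketch. The function names, the memory handling, and the toy refinement step are illustrative assumptions, not the submitted system:

```python
import numpy as np

def decode_complex_spectrum(real_est, imag_est):
    """Complex spectral mapping: the network estimates real and imaginary
    spectra, from which both magnitude and phase are recovered."""
    spec = real_est + 1j * imag_est
    return np.abs(spec), np.angle(spec)

def recursive_refine(noisy, n_stages, step_fn, memory=None):
    """Recursive learning: one refinement step is applied repeatedly,
    carrying a memory state across stages to aggregate information."""
    est = noisy
    for _ in range(n_stages):
        est, memory = step_fn(noisy, est, memory)
    return est

# Toy check: one T-F bin with real part 3 and imaginary part 4
# recovers magnitude 5 and phase arctan2(4, 3).
mag, phase = decode_complex_spectrum(np.array([3.0]), np.array([4.0]))

# Toy refinement: each stage halves the current estimate (a stand-in
# for the actual denoising step), so 8 -> 4 -> 2 -> 1 over 3 stages.
est = recursive_refine(8.0, 3, lambda noisy, est, mem: (est * 0.5, mem))
```

Decoding real and imaginary parts jointly is what lets the phase be estimated implicitly, rather than reusing the noisy phase as magnitude-only methods do.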