End-to-End Classification of Reverberant Rooms using DNNs


Abstract

Reverberation is present in our workplaces, our homes, concert halls and theatres. This paper investigates how deep learning can exploit the effect of reverberation on speech to classify a recording according to the room in which it was recorded. Existing approaches in the literature rely on domain expertise to manually select acoustic parameters as inputs to classifiers. Estimating these parameters from reverberant speech is prone to estimation errors, which degrade classification accuracy. To overcome these limitations, this paper shows how deep neural networks (DNNs) can perform the classification by operating directly on reverberant speech spectra, and a convolutional recurrent neural network (CRNN) with an attention mechanism is proposed for the task. The relationship between the reverberant speech representations learned by the DNNs and established acoustic parameters is also investigated. For evaluation, acoustic impulse responses (AIRs) measured in 7 real rooms are taken from the ACE Challenge dataset. In the experiments, the CRNN classifier achieves a classification accuracy of 78% with 5 hours of training data and 90% with 10 hours.
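To illustrate the attention mechanism the abstract refers to, the sketch below shows one common formulation: attention-weighted pooling, which collapses a variable-length sequence of per-frame features (e.g. the recurrent outputs of a CRNN over speech spectra) into a single utterance-level vector for room classification. This is a minimal NumPy sketch under assumed shapes; the function names, dimensions, and the specific attention formulation are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(frames, w):
    """Collapse a (T, D) sequence of frame features into one (D,) vector.

    frames : (T, D) per-frame features (hypothetical CRNN recurrent outputs)
    w      : (D,)   learned attention parameter (assumed, for illustration)
    """
    scores = frames @ w        # (T,) relevance score for each time frame
    alpha = softmax(scores)    # attention weights over frames, sum to 1
    pooled = alpha @ frames    # weighted sum over time -> (D,)
    return pooled, alpha

# Toy example with random features standing in for CRNN outputs
rng = np.random.default_rng(0)
T, D = 50, 16                  # assumed: 50 time frames, 16-dim features
frames = rng.standard_normal((T, D))
w = rng.standard_normal(D)
pooled, alpha = attention_pool(frames, w)
```

The pooled vector would then feed a final dense layer with 7 outputs (one per ACE-challenge room); frames whose spectra carry more reverberation cues receive larger attention weights.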
