Multi-Scale Temporal Convolution Network for Classroom Voice Detection


Abstract

Teaching through the cooperation of an expert teacher and an assistant teacher, the so-called double-teacher classroom, is becoming increasingly prevalent in K-12 education: the course is given by the expert online and presented on a projection screen in the classroom, while the teacher in the classroom acts as an assistant who guides the students in learning. To monitor teaching quality, a microphone clipped to the assistant's neckline is typically used for voice recording, whose output is fed to the downstream tasks of automatic speech recognition (ASR) and natural language processing (NLP). However, besides the assistant's voice, the recording contains other interfering voices, including the expert's and the students'. Here, we propose to extract the assistant's voice from the perspective of sound event detection, i.e., the voices are classified into four categories: the expert, the assistant teacher, the mixture of the two, and the background. To make frame-level identifications, which are important for capturing sensitive words for the downstream tasks, a multi-scale temporal convolutional neural network is constructed with stacked dilated convolutions that capture both local and global properties. The resulting features are concatenated and fed to a classification network built from three linear layers. The framework is evaluated on simulated data and real-world recordings, achieving considerable performance in terms of precision and recall compared with some classical classification methods.
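The architecture described above can be sketched as follows. This is a minimal PyTorch illustration, not the paper's exact model: the feature dimension, channel counts, dilation rates, and hidden-layer sizes are assumptions chosen only to show how multi-scale dilated convolutions feed a three-linear-layer classifier that labels every frame.

```python
import torch
import torch.nn as nn

class MultiScaleTCN(nn.Module):
    """Illustrative multi-scale temporal convolution network for
    frame-level voice classification into four classes (expert,
    assistant, mixture, background). All sizes are assumptions."""

    def __init__(self, n_feats=64, n_channels=32,
                 dilations=(1, 2, 4, 8), n_classes=4):
        super().__init__()
        # One dilated-convolution branch per scale; padding = dilation
        # with kernel size 3 keeps the frame count unchanged, so each
        # frame receives its own label (frame-level identification).
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(n_feats, n_channels, kernel_size=3,
                          dilation=d, padding=d),
                nn.ReLU(),
            )
            for d in dilations
        ])
        # Concatenated multi-scale features -> three linear layers.
        concat_dim = n_channels * len(dilations)
        self.classifier = nn.Sequential(
            nn.Linear(concat_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        # x: (batch, n_feats, n_frames)
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        feats = feats.transpose(1, 2)   # (batch, n_frames, concat_dim)
        return self.classifier(feats)   # per-frame class logits

# Per-frame logits for two 10-frame clips of 64-dim features.
model = MultiScaleTCN()
logits = model(torch.randn(2, 64, 10))
print(logits.shape)  # (batch, n_frames, n_classes)
```

Small dilations cover local context while large ones widen the receptive field toward global context, which is why the branches are concatenated rather than stacked sequentially in this sketch.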
