Waveform-based Voice Activity Detection Exploiting Fully Convolutional Networks with Multi-Branched Encoders


Abstract

In this study, we propose an encoder-decoder system built on fully convolutional networks that performs voice activity detection (VAD) directly on the time-domain waveform. The proposed system processes the input waveform and classifies each of its segments as either speech or non-speech. This novel waveform-based VAD algorithm, denoted WVAD, has two main distinguishing features. First, compared with most conventional VAD systems that use spectral features, the raw waveforms employed in WVAD contain more comprehensive information and are thus expected to enable more accurate speech/non-speech predictions. Second, owing to its multi-branched architecture, WVAD can be extended with an ensemble of encoders, referred to as WEVAD, which incorporates multiple attributes of the utterances and can therefore yield better VAD performance under specified acoustic conditions. We evaluated WVAD and WEVAD on two datasets: experiments on AURORA2 reveal that WVAD outperforms many state-of-the-art VAD algorithms, and the TMHINT task confirms that, by combining multiple utterance attributes, WEVAD performs even better than WVAD.
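To make the multi-branched encoder-decoder idea concrete, below is a minimal PyTorch sketch of a fully convolutional VAD network operating on the raw waveform. The branch count, kernel sizes, channel widths, and strides here are illustrative assumptions, not the hyperparameters of WVAD/WEVAD.

    import torch
    import torch.nn as nn

    class BranchEncoder(nn.Module):
        """One 1-D convolutional encoder branch over the raw waveform."""
        def __init__(self, channels=32, kernel_size=11):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, channels, kernel_size, stride=2, padding=kernel_size // 2),
                nn.ReLU(),
                nn.Conv1d(channels, channels, kernel_size, stride=2, padding=kernel_size // 2),
                nn.ReLU(),
            )

        def forward(self, x):           # x: (batch, 1, samples)
            return self.net(x)          # (batch, channels, samples / 4)

    class MultiBranchVAD(nn.Module):
        """Fully convolutional encoder-decoder VAD with an ensemble of encoder branches."""
        def __init__(self, n_branches=3, channels=32):
            super().__init__()
            # Illustrative choice: branches differ only in receptive field (kernel size).
            self.branches = nn.ModuleList(
                BranchEncoder(channels, k) for k in (7, 11, 15)[:n_branches]
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose1d(n_branches * channels, channels, 4, stride=2, padding=1),
                nn.ReLU(),
                nn.ConvTranspose1d(channels, 1, 4, stride=2, padding=1),
                nn.Sigmoid(),           # per-sample speech probability
            )

        def forward(self, x):
            # Merge the branch features along the channel axis, then decode
            # back to the input resolution.
            z = torch.cat([b(x) for b in self.branches], dim=1)
            return self.decoder(z)      # (batch, 1, samples)

    wav = torch.randn(2, 1, 16000)      # two 1-second utterances at 16 kHz
    probs = MultiBranchVAD()(wav)       # speech/non-speech probability per sample

In this sketch the branches differ only in kernel size; in an ensemble setting such as WEVAD, each branch could instead be specialized for a particular utterance attribute (e.g., noise type or SNR), with the branch features merged before decoding.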
