For children, the system trained on a large corpus of adult speakers performed worse than a system trained on a much smaller corpus of children's speech. This is due to the acoustic mismatch between training and testing data. To capture more acoustic variability, we trained a shared system on mixed data from adults and children. The shared system yields the best EER for children with no degradation for adults. Thus, a single system trained with mixed data is applicable to speaker verification for both adults and children.
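The equal error rate (EER) used to compare the systems is the operating point at which the false acceptance rate (FAR) equals the false rejection rate (FRR). As a minimal sketch — assuming verification trials produce similarity scores, with higher scores indicating a more likely genuine speaker — the EER can be approximated by sweeping a threshold over the observed scores:

```python
def compute_eer(genuine_scores, impostor_scores):
    """Approximate the equal error rate from two lists of trial scores.

    genuine_scores: scores for same-speaker trials (higher = accept).
    impostor_scores: scores for different-speaker trials.
    Returns the average of FAR and FRR at the threshold where they are closest.
    """
    best_gap, eer = None, None
    # Sweep candidate thresholds over all distinct observed scores.
    for t in sorted(set(genuine_scores + impostor_scores)):
        # FAR: fraction of impostor trials wrongly accepted at threshold t.
        far = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        # FRR: fraction of genuine trials wrongly rejected at threshold t.
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        gap = abs(far - frr)
        if best_gap is None or gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return eer
```

With perfectly separable scores the EER is 0; overlapping score distributions — as caused by the acoustic mismatch between adult training data and child test data — push it upward. Production evaluations typically interpolate the ROC curve rather than sweeping raw scores, so this is an illustrative approximation, not the exact metric implementation.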