Recent studies argue that knowledge distillation is promising for speech translation (ST) using end-to-end models. In this work, we investigate the effect of knowledge distillation on a cascade ST built from automatic speech recognition (ASR) and machine translation (MT) models. We distill knowledge from a teacher model trained on human transcripts to a student model trained on erroneous ASR transcriptions. Our experimental results demonstrate that knowledge distillation is beneficial for a cascade ST. Further investigation combining knowledge distillation with fine-tuning revealed that the combination consistently improved translation quality on two language pairs: English-Italian and Spanish-English.
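The abstract does not spell out the distillation objective, but the described setup (a teacher MT model reading clean human transcripts, a student MT model reading erroneous ASR hypotheses) can be illustrated with a word-level distillation loss. The sketch below, in PyTorch, is only a minimal illustration under that assumption; the function name, arguments, and the interpolation with cross-entropy are illustrative choices, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation) of word-level knowledge
# distillation for the MT component of a cascade ST system. The teacher
# decodes from the clean human transcript, the student decodes from the
# erroneous ASR transcript, and the student is trained to match the
# teacher's distribution over target-language tokens.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target_ids,
                      pad_id, alpha=0.5, temperature=1.0):
    """Interpolate a KD term with standard cross-entropy.

    student_logits: (batch, tgt_len, vocab) from the student MT model
                    conditioned on the ASR hypothesis.
    teacher_logits: (batch, tgt_len, vocab) from the teacher MT model
                    conditioned on the human transcript (no gradient).
    target_ids:     (batch, tgt_len) reference target-language tokens.
    """
    vocab = student_logits.size(-1)
    student_logp = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_prob = F.softmax(teacher_logits.detach() / temperature, dim=-1)

    # Per-token KL divergence between teacher and student distributions.
    kd = F.kl_div(student_logp, teacher_prob, reduction="none").sum(-1)

    # Standard cross-entropy against the reference translation.
    ce = F.cross_entropy(student_logits.view(-1, vocab),
                         target_ids.view(-1),
                         ignore_index=pad_id,
                         reduction="none").view_as(kd)

    # Ignore padding positions when averaging.
    mask = (target_ids != pad_id).float()
    loss = (alpha * kd * (temperature ** 2) + (1.0 - alpha) * ce) * mask
    return loss.sum() / mask.sum()
```

In this sketch, `alpha` trades off imitation of the teacher against fitting the reference translation, and `temperature` softens both distributions; combining such a distillation objective with subsequent fine-tuning on ASR outputs corresponds to the combination the abstract reports as consistently helpful.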