In this paper, we apply self-knowledge distillation to text summarization, which we argue can alleviate problems with maximum-likelihood training on single-reference and noisy datasets. Instead of relying on one-hot annotation labels, our student summarization model is trained with guidance from a teacher which generates smoothed labels to help regularize training. Furthermore, to better model uncertainty during training, we introduce multiple noise signals for both the teacher and student models. We demonstrate experimentally on three benchmarks that our framework boosts the performance of both pretrained and non-pretrained summarizers, achieving state-of-the-art results.
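The following is a minimal sketch of the kind of objective the abstract describes: a student trained against a mixture of the one-hot reference labels and the teacher's smoothed output distribution. The interface, the mixing weight `alpha`, and the temperature are illustrative assumptions for a PyTorch setup, not the paper's exact implementation; the multiple noise signals (e.g., keeping dropout active or perturbing inputs for teacher and student) would be applied when computing the two sets of logits.

```python
# Sketch of a self-knowledge-distillation loss, assuming PyTorch.
# `alpha` and `temperature` are hypothetical hyperparameters for illustration.
import torch
import torch.nn.functional as F

def self_distillation_loss(student_logits, teacher_logits, gold_ids,
                           alpha=0.5, temperature=1.0):
    """Mix one-hot NLL on the reference with KL to the teacher's smoothed labels."""
    # Standard maximum-likelihood term on the single (possibly noisy) reference.
    nll = F.cross_entropy(student_logits, gold_ids)
    # Soft targets from the teacher act as smoothed labels that regularize training.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        soft_targets,
        reduction="batchmean",
    ) * temperature ** 2
    return (1.0 - alpha) * nll + alpha * kl
```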