Knowledge Distillation (KD) is extensively used in Natural Language Processing to compress the pre-training and task-specific fine-tuning phases of large neural language models. A student model is trained to minimize a convex combination of the prediction loss over the labels and another loss over the teacher's output. However, most existing works either fix the interpolating weight between the two losses a priori or vary the weight using heuristics. In this work, we propose a novel sample-wise loss weighting method, RW-KD. A meta-learner, trained simultaneously with the student, adaptively re-weights the two losses for each sample. We demonstrate, on 7 datasets of the GLUE benchmark, that RW-KD outperforms other loss re-weighting methods for KD.
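To make the loss being re-weighted concrete, below is a minimal sketch of a per-sample weighted KD objective in PyTorch. It only illustrates the convex combination of the label loss and the distillation loss that the abstract describes; the function name, the temperature parameter T, and the `weights` argument (standing in for the per-sample weights a meta-learner such as RW-KD would produce) are illustrative assumptions, not the paper's implementation.

```python
# Sketch of a per-sample weighted KD loss (assumes PyTorch).
# The meta-learner that produces the per-sample weights is not reproduced here;
# `weights` is simply whatever per-sample weighting scheme is in use.
import torch
import torch.nn.functional as F

def sample_wise_kd_loss(student_logits, teacher_logits, labels, weights, T=2.0):
    """Convex combination of label loss and distillation loss, per sample.

    weights: tensor of shape (batch,) with values in [0, 1];
             a fixed scalar recovers the standard KD objective.
    """
    # Cross-entropy against the gold labels, kept per sample.
    ce = F.cross_entropy(student_logits, labels, reduction="none")

    # KL divergence between temperature-softened teacher and student
    # distributions, summed over classes to stay per sample.
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    kd = F.kl_div(log_p_student, p_teacher, reduction="none").sum(dim=-1) * (T * T)

    # Per-sample convex combination of the two losses.
    per_sample = weights * ce + (1.0 - weights) * kd
    return per_sample.mean()
```

With a fixed `weights` value shared across the batch this reduces to the usual fixed-interpolation KD loss; the sample-wise setting lets the weighting differ per example.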