Knowledge Distillation (KD) is extensively used to compress and deploy large pre-trained language models on edge devices for real-world applications. However, one neglected area of research is the impact of noisy (corrupted) labels on KD. We present, to the best of our knowledge, the first study of KD with noisy labels in Natural Language Understanding (NLU). We document the scope of the problem and present two methods to mitigate the impact of label noise. Experiments on the GLUE benchmark show that our methods are effective even under high noise levels. Nevertheless, our results indicate that more research is necessary to cope with label noise under KD.
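For context, the sketch below shows the standard KD objective (soft-target KL term plus hard-label cross-entropy), not the paper's two mitigation methods; it only illustrates where corrupted labels enter the training signal. The function name kd_loss and the alpha/temperature values are illustrative assumptions.

# Minimal sketch of the standard KD objective, assuming a PyTorch setup.
# Label noise affects only the hard-label cross-entropy term; the KL term
# depends on the teacher's soft predictions, not the (possibly noisy) labels.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """alpha * CE(hard labels) + (1 - alpha) * T^2 * KL(student || teacher soft targets)."""
    # Soft-target term: student mimics the teacher's softened distribution.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kl = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    # Hard-label term: corrupted labels inject noise into training here.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * ce + (1.0 - alpha) * kl

if __name__ == "__main__":
    torch.manual_seed(0)
    student = torch.randn(8, 3)          # batch of 8 examples, 3 classes
    teacher = torch.randn(8, 3)
    labels = torch.randint(0, 3, (8,))   # hard labels, possibly corrupted
    print(kd_loss(student, teacher, labels).item())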