The current recipe for better model performance within NLP is to increase model size and training data. While this gives us models with increasingly impressive results, it also makes state-of-the-art NLP models harder to train and deploy because of the growing computational costs. Model compression is a field of research that aims to alleviate this problem. The field encompasses methods that aim to preserve a model's performance while decreasing its size. One such method is knowledge distillation. In this article, we investigate the effect of knowledge distillation on named entity recognition models for Swedish. We show that while some sequence tagging models benefit from knowledge distillation, not all models do. This prompts us to ask in which situations, and for which models, knowledge distillation is beneficial. We also reason about the effect of knowledge distillation on computational costs.
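For readers less familiar with the method, the sketch below shows the standard teacher-student distillation objective for a token-level tagger: a weighted sum of cross-entropy against the gold tags and KL divergence between temperature-softened teacher and student distributions. This is a minimal illustration assuming a PyTorch setup; the function name, the temperature, and the mixing weight alpha are illustrative assumptions, not the exact configuration used in the article.

```python
# Minimal sketch of a knowledge distillation loss for sequence tagging.
# Assumptions (not from the article): PyTorch, token-level logits of shape
# (batch, seq_len, num_tags), a fixed temperature, and a mixing weight alpha.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, gold_labels,
                      temperature=2.0, alpha=0.5, ignore_index=-100):
    """Combine hard-label cross-entropy with soft-label KL divergence."""
    num_tags = student_logits.size(-1)

    # Hard loss: standard cross-entropy against the gold NER tags.
    hard_loss = F.cross_entropy(
        student_logits.view(-1, num_tags),
        gold_labels.view(-1),
        ignore_index=ignore_index,
    )

    # Soft loss: KL divergence between temperature-softened teacher and
    # student token distributions; the T^2 factor keeps its gradient scale
    # comparable to the hard loss (Hinton et al., 2015).
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1).view(-1, num_tags)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1).view(-1, num_tags)
    soft_loss = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

    return alpha * hard_loss + (1.0 - alpha) * soft_loss
```

In this formulation, alpha trades off imitating the teacher against fitting the gold annotations; a smaller student trained with this combined objective is what the article evaluates against training on the gold labels alone.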