Abstract
Pre-trained Transformer-based models have achieved state-of-the-art performance for various Natural Language Processing (NLP) tasks. However, these models often have billions of parameters, and thus are too resource-hungry and computation-intensive to suit low-capability devices or applications with strict latency requirements. One potential remedy for this is model compression, which has attracted considerable research attention. Here, we summarize the research in compressing Transformers, focusing on the especially popular BERT model. In particular, we survey the state of the art in compression for BERT, we clarify the current best practices for compressing large-scale Transformer models, and we provide insights into the workings of various methods. Our categorization and analysis also shed light on promising future research directions for achieving lightweight, accurate, and generic NLP models.
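To make the idea of model compression concrete, the following is a minimal sketch of one family of methods the survey covers: post-training dynamic quantization of a BERT model. It assumes a PyTorch environment with the Hugging Face `transformers` library installed, and uses the generic `bert-base-uncased` checkpoint purely for illustration; the survey itself discusses many other approaches (pruning, knowledge distillation, parameter sharing, etc.).

```python
# Sketch: shrink BERT's inference footprint with dynamic int8 quantization,
# assuming PyTorch and Hugging Face `transformers` are available.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-uncased"  # any fine-tuned BERT checkpoint would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Replace fp32 weights of Linear layers with int8 weights; activations are
# quantized on the fly at inference time, reducing memory and often latency.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("Model compression makes BERT deployable.", return_tensors="pt")
with torch.no_grad():
    logits = quantized(**inputs).logits
print(logits.shape)
```

Dynamic quantization is attractive as a first step because it needs no retraining; techniques such as distillation typically recover more accuracy but require an additional training phase.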