When scaled to hundreds of billions of parameters, pretrained language models such as GPT-3 (Brown et al., 2020) achieve remarkable few-shot performance. However, enormous amounts of compute are required for training and applying such big models, resulting in a large carbon footprint and making it difficult for researchers and practitioners to use them. We show that performance similar to GPT-3 can be obtained with language models that are much "greener" in that their parameter count is several orders of magnitude smaller. This is achieved by converting textual inputs into cloze questions that contain a task description, combined with gradient-based optimization; exploiting unlabeled data gives further improvements. We identify key factors required for successful natural language understanding with small language models.
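The cloze reformulation described above can be illustrated with a minimal sketch. The pattern ("It was ___.") and the label-to-token verbalizer below are hypothetical examples chosen for illustration, not the paper's exact patterns:

```python
# Minimal sketch of converting a classification input into a cloze
# question with a task description, in the spirit of pattern-based
# few-shot learning. Pattern and verbalizer are illustrative only.

def to_cloze(text: str, mask_token: str = "[MASK]") -> str:
    """Wrap the input in a pattern containing a mask slot that a
    masked language model must fill in."""
    return f"{text} It was {mask_token}."

# A verbalizer maps each label to a single token; the model's
# probabilities for these tokens at the mask position score the labels.
VERBALIZER = {"positive": "great", "negative": "terrible"}

def label_tokens() -> list:
    """Candidate tokens compared at the mask position."""
    return list(VERBALIZER.values())

example = to_cloze("Best pizza I ever had!")
print(example)  # Best pizza I ever had! It was [MASK].
```

In practice, a pretrained masked language model would be queried (and fine-tuned with gradient-based optimization) on such cloze inputs, with the verbalizer translating its token predictions back into task labels.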