Non-parametric neural language models (NLMs) learn predictive distributions of text using an external datastore, which allows them to learn by explicitly memorizing the training datapoints. While effective, these models often require retrieval from a large datastore at test time, significantly increasing inference overhead and thus limiting the deployment of non-parametric NLMs in practical applications. In this paper, we take the recently proposed k-nearest neighbors language model (kNN-LM) as an example and explore methods to improve its efficiency along various dimensions. Experiments on the standard WikiText-103 benchmark and domain-adaptation datasets show that our methods achieve up to a 6x speed-up in inference while retaining comparable performance. The empirical analysis we present may provide guidelines for future research seeking to develop or deploy more efficient non-parametric NLMs.
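To make the retrieval cost concrete, the kNN-LM discussed above can be sketched as follows: training-time context representations serve as datastore keys, their next tokens as values, and at test time the parametric LM distribution is interpolated with a distribution over retrieved neighbors. This is a minimal NumPy sketch under toy assumptions (dense vectors in place of transformer hidden states, brute-force search in place of an approximate index); the function name, dimensions, and hyperparameter values are illustrative, not taken from the paper.

```python
# Minimal sketch of kNN-LM inference with toy NumPy vectors.
# All names and sizes here are illustrative assumptions.
import numpy as np

def knn_lm_prob(query, keys, values, p_lm, vocab_size, k=4, lam=0.25):
    """Interpolate a parametric LM distribution with a kNN distribution.

    query:  test-time context representation, shape (d,)
    keys:   datastore keys (training context representations), shape (n, d)
    values: next-token id stored with each key, shape (n,)
    p_lm:   parametric LM next-token distribution, shape (vocab_size,)
    """
    # Retrieve the k nearest keys by squared L2 distance
    # (a real system would use an approximate index instead).
    dists = np.sum((keys - query) ** 2, axis=1)
    nn = np.argsort(dists)[:k]
    # Softmax over negative distances gives neighbor weights.
    w = np.exp(-dists[nn])
    w /= w.sum()
    # Scatter the weights onto the vocabulary to form p_kNN.
    p_knn = np.zeros(vocab_size)
    np.add.at(p_knn, values[nn], w)
    # Final distribution: interpolation of kNN and parametric LM.
    return lam * p_knn + (1.0 - lam) * p_lm
```

The brute-force distance computation over all n keys is exactly the test-time overhead the paper targets: every predicted token costs a search over the full datastore, which motivates shrinking the datastore, reducing retrieval frequency, or speeding up the nearest-neighbor search itself.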