Abstract

We present a language model that combines a large parametric neural network (i.e., a transformer) with a non-parametric episodic memory component in an integrated architecture. Our model uses extended short-term context by caching local hidden states (similar to Transformer-XL) and global long-term memory by retrieving a set of nearest neighbor tokens at each timestep. We design a gating function to adaptively combine multiple information sources to make a prediction. This mechanism allows the model to use local context, short-term memory, or long-term memory (or any combination of them) on an ad hoc basis depending on the context. Experiments on word-based and character-based language modeling datasets demonstrate the efficacy of our proposed method compared to strong baselines.
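To make the gating idea concrete, below is a minimal sketch (not the authors' implementation) of a per-timestep gating module that mixes three information sources: the local transformer hidden state, a representation from the short-term cache, and a representation built from retrieved nearest-neighbor tokens. The module name, the softmax-based mixing, and the variable names are illustrative assumptions; the paper's actual gating function may differ in form.

```python
# Hypothetical sketch of a gating function that adaptively combines
# local context, short-term cache memory, and long-term retrieved memory.
# Names (GatedMemoryCombiner, h_local, h_cache, h_retrieved) are assumptions,
# not the authors' API.
import torch
import torch.nn as nn


class GatedMemoryCombiner(nn.Module):
    def __init__(self, d_model: int, num_sources: int = 3):
        super().__init__()
        # Predict one mixing weight per information source,
        # conditioned on the local hidden state at each timestep.
        self.gate = nn.Linear(d_model, num_sources)

    def forward(self, h_local, h_cache, h_retrieved):
        # Each input: (batch, seq_len, d_model)
        sources = torch.stack([h_local, h_cache, h_retrieved], dim=-2)  # (B, T, 3, D)
        weights = torch.softmax(self.gate(h_local), dim=-1)             # (B, T, 3)
        # Convex combination of the three sources; the result would feed
        # the output softmax over the vocabulary.
        return (weights.unsqueeze(-1) * sources).sum(dim=-2)            # (B, T, D)


# Usage sketch: at a given timestep the learned weights can put most mass on
# the local context, the cached states, or the retrieved neighbors, which is
# the "ad hoc" behavior described in the abstract.
combiner = GatedMemoryCombiner(d_model=512)
h_local = torch.randn(2, 16, 512)
h_cache = torch.randn(2, 16, 512)
h_retrieved = torch.randn(2, 16, 512)
combined = combiner(h_local, h_cache, h_retrieved)  # (2, 16, 512)
```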