Recent progress in language modeling has been driven not only by advances in neural architectures, but also through hardware and optimization improvements. In this paper, we revisit the neural probabilistic language model (NPLM) of Bengio et al. (2003), which simply concatenates word embeddings within a fixed window and passes the result through a feed-forward network to predict the next word. When scaled up to modern hardware, this model (despite its many limitations) performs much better than expected on word-level language model benchmarks. Our analysis reveals that the NPLM achieves lower perplexity than a baseline Transformer with short input contexts but struggles to handle long-term dependencies. Inspired by this result, we modify the Transformer by replacing its first self-attention layer with the NPLM's local concatenation layer, which results in small but consistent perplexity decreases across three word-level language modeling datasets.
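To make the architecture concrete, here is a minimal sketch of the NPLM forward pass described above: embeddings of a fixed context window are concatenated and fed through a one-hidden-layer feed-forward network to score the next word. This is an illustrative NumPy toy with made-up dimensions (`V`, `d`, `window`, `hidden`) and random weights, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: vocabulary, embedding dim, context window, hidden dim.
V, d, window, hidden = 50, 8, 3, 16

E = rng.normal(size=(V, d))                 # word embedding table
W1 = rng.normal(size=(window * d, hidden))  # projection of concatenated context
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, V))           # output layer over the vocabulary
b2 = np.zeros(V)

def nplm_logits(context_ids):
    """Concatenate the embeddings of a fixed-size window of preceding
    word ids, then apply a feed-forward network (as in Bengio et al., 2003)."""
    x = E[context_ids].reshape(-1)          # (window * d,) concatenated embeddings
    h = np.tanh(x @ W1 + b1)                # hidden layer
    return h @ W2 + b2                      # unnormalized next-word scores

def next_word_probs(context_ids):
    logits = nplm_logits(context_ids)
    exp = np.exp(logits - logits.max())     # numerically stable softmax
    return exp / exp.sum()

probs = next_word_probs([4, 17, 32])        # ids of the three preceding words
```

The fixed window is exactly what limits the model to short-range context: any word more than `window` positions back cannot influence the prediction, which matches the long-term-dependency weakness noted above.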