Feed-forward layers constitute two-thirds of a transformer model's parameters, yet their role in the network remains under-explored. We show that feed-forward layers in transformer-based language models operate as key-value memories, where each key correlates with textual patterns in the training examples, and each value induces a distribution over the output vocabulary. Our experiments show that the learned patterns are human-interpretable, and that lower layers tend to capture shallow patterns, while upper layers learn more semantic ones. The values complement the keys' input patterns by inducing output distributions that concentrate probability mass on tokens likely to appear immediately after each pattern, particularly in the upper layers. Finally, we demonstrate that the output of a feed-forward layer is a composition of its memories, which is subsequently refined throughout the model's layers via residual connections to produce the final output distribution.
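To make the key-value memory reading concrete, here is a minimal sketch of a feed-forward sublayer computed as FF(x) = f(x · K^T) · V. This is an illustration of the abstract's claim, not the paper's code: the dimensions, the ReLU activation, and all variable names are assumptions chosen for clarity.

```python
import numpy as np

# Minimal sketch of a transformer feed-forward sublayer viewed as a
# key-value memory: FF(x) = f(x @ K.T) @ V. Sizes and the ReLU
# activation are illustrative assumptions, not the paper's exact setup.
d_model, d_mem = 16, 64                    # hidden size, number of memory cells
rng = np.random.default_rng(0)
K = rng.standard_normal((d_mem, d_model))  # keys: one pattern detector per row
V = rng.standard_normal((d_mem, d_model))  # values: one output update per row

def feed_forward(x):
    # Memory coefficients: how strongly the input matches each key's pattern.
    m = np.maximum(0.0, x @ K.T)           # shape (d_mem,)
    # Output: a weighted sum (composition) of the value vectors.
    return m @ V                           # shape (d_model,)

x = rng.standard_normal(d_model)           # a token's hidden representation
y = feed_forward(x)
```

In this reading, each row of K fires on a textual pattern in the input representation, and the corresponding row of V, if projected through the model's output embedding, yields the distribution over the vocabulary that the paper inspects; the layer's output is the coefficient-weighted composition of these value vectors.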