Recently, it has been argued that encoder-decoder models can be made more interpretable by replacing the softmax function in the attention with its sparse variants. In this work, we introduce a novel, simple method for achieving sparsity in attention: we replace the softmax activation with a ReLU, and show that sparsity naturally emerges from such a formulation. Training stability is achieved with layer normalization with either a specialized initialization or an additional gating function. Our model, which we call Rectified Linear Attention (ReLA), is easy to implement and more efficient than previously proposed sparse attention mechanisms. We apply ReLA to the Transformer and conduct experiments on five machine translation tasks. ReLA achieves translation performance comparable to several strong baselines, with training and decoding speed similar to that of vanilla attention. Our analysis shows that ReLA delivers a high sparsity rate and head diversity, and the induced cross attention achieves better accuracy with respect to source-target word alignment than recent sparsified softmax-based models. Intriguingly, ReLA heads also learn to attend to nothing (i.e. 'switch off') for some queries, which is not possible with sparsified softmax alternatives.
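To make the mechanism concrete, below is a minimal single-head sketch of rectified linear attention in PyTorch. It only illustrates the core idea stated in the abstract: scaled dot-product scores passed through a ReLU instead of a softmax, followed by a normalization of the attention output for stability. The RMS-style normalization with a learnable gain, and the class and parameter names (`ReLASketch`, `gain`, `eps`), are illustrative assumptions; the paper's exact specialized initialization and gating variants are not reproduced here.

```python
import torch
import torch.nn as nn


class ReLASketch(nn.Module):
    """Minimal single-head sketch of rectified linear attention (ReLA).

    Assumptions beyond the abstract: scaled dot-product scores and an
    RMS-style normalization with a learnable gain on the attention output.
    """

    def __init__(self, d_model: int, eps: float = 1e-6):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.gain = nn.Parameter(torch.ones(d_model))  # normalization gain
        self.eps = eps
        self.scale = d_model ** -0.5

    def forward(self, query, key_value):
        q = self.q_proj(query)      # (batch, tgt_len, d_model)
        k = self.k_proj(key_value)  # (batch, src_len, d_model)
        v = self.v_proj(key_value)

        scores = torch.bmm(q, k.transpose(1, 2)) * self.scale
        # ReLU replaces softmax: negative scores become exact zeros, so
        # sparsity emerges and a query may attend to nothing at all.
        weights = torch.relu(scores)

        out = torch.bmm(weights, v)
        # RMS-style normalization keeps the un-normalized weighted sum
        # from growing unboundedly during training.
        rms = out.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return out * rms * self.gain


# Usage: cross-attention from 5 target positions over 7 source positions.
attn = ReLASketch(d_model=64)
tgt = torch.randn(2, 5, 64)
src = torch.randn(2, 7, 64)
print(attn(tgt, src).shape)  # torch.Size([2, 5, 64])
```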