The Transformer translation model is based on the multi-head attention mechanism, which can be parallelized easily. The multi-head attention network performs the scaled dot-product attention function in parallel, empowering the model by jointly attending to information from different representation subspaces at different positions. In this paper, we present an approach to learning a hard retrieval attention where an attention head only attends to one token in the sentence rather than all tokens. The matrix multiplication between attention probabilities and the value sequence in the standard scaled dot-product attention can thus be replaced by a simple and efficient retrieval operation. We show that our hard retrieval attention mechanism is 1.43 times faster in decoding, while preserving translation quality on a wide range of machine translation tasks when used in the decoder self- and cross-attention networks.
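As a rough illustration of the replacement described above, the sketch below (PyTorch; not taken from the paper, and the function names are ours) contrasts standard scaled dot-product attention with a hard retrieval variant in which each query selects only its highest-scoring token and retrieves the corresponding value vector, avoiding the matrix multiplication between attention probabilities and the value sequence.

```python
import torch


def scaled_dot_product_attention(q, k, v):
    """Standard attention: softmax over scores, then a matmul with the values.

    q, k, v: tensors of shape (batch, heads, seq_len, d_head).
    """
    scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5
    probs = scores.softmax(dim=-1)
    return probs @ v  # weighted sum over *all* value vectors


def hard_retrieval_attention(q, k, v):
    """Hard retrieval: each query attends to exactly one token.

    The probability-value matmul is replaced by an argmax over the scores
    followed by a gather (retrieval) of a single value vector per query.
    """
    scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5
    idx = scores.argmax(dim=-1)                              # (batch, heads, q_len)
    idx = idx.unsqueeze(-1).expand(-1, -1, -1, v.size(-1))   # (batch, heads, q_len, d_head)
    return torch.gather(v, dim=-2, index=idx)                # retrieve one value vector per query


if __name__ == "__main__":
    q = k = v = torch.randn(2, 8, 5, 64)  # toy self-attention shapes
    soft = scaled_dot_product_attention(q, k, v)
    hard = hard_retrieval_attention(q, k, v)
    print(soft.shape, hard.shape)  # both: torch.Size([2, 8, 5, 64])
```

The sketch covers only the forward pass: it shows why the retrieval step is cheap (an index lookup rather than a matmul), but the abstract does not specify how the hard selection is trained, so no gradient mechanism for the argmax is shown here.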