Transformer models are permutation equivariant. To supply the order and type information of the input tokens, position and segment embeddings are usually added to the input. Recent works have proposed variations of positional encodings, with relative position encodings achieving better performance. Our analysis shows that the gain actually comes from moving positional information from the input to the attention layer. Motivated by this, we introduce Decoupled Positional Attention for Transformers (DIET), a simple yet effective mechanism to encode position and segment information into Transformer models. The proposed method has faster training and inference time, while achieving competitive performance on the GLUE, XTREME, and WMT benchmarks. We further generalize our method to long-range Transformers and show performance gains.
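To make the core idea concrete, below is a minimal sketch (not the paper's exact formulation) of attention where positional information is decoupled from the token embeddings and injected directly into the attention logits as a learned per-head relative-offset bias. All names (`PositionBiasedSelfAttention`, `rel_bias`, `max_len`) are illustrative assumptions, not identifiers from the paper.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionBiasedSelfAttention(nn.Module):
    """Self-attention with a decoupled, content-independent position bias.

    Instead of adding absolute position embeddings to the input tokens,
    a learnable bias indexed by relative offset (i - j) is added to the
    attention logits, one value per head and offset.
    """

    def __init__(self, dim, num_heads, max_len=512):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.max_len = max_len
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)
        # One bias per head and per relative offset in [-(max_len-1), max_len-1].
        self.rel_bias = nn.Parameter(torch.zeros(num_heads, 2 * max_len - 1))

    def forward(self, x):
        B, T, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (B, heads, T, head_dim).
        q = q.view(B, T, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(B, T, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(B, T, self.num_heads, self.head_dim).transpose(1, 2)

        # Content-only logits: queries/keys carry no positional signal.
        logits = q @ k.transpose(-2, -1) / math.sqrt(self.head_dim)

        # Decoupled positional term: bias looked up by relative offset,
        # shared across the batch and independent of token content.
        pos = torch.arange(T, device=x.device)
        offsets = pos[:, None] - pos[None, :] + self.max_len - 1  # (T, T)
        logits = logits + self.rel_bias[:, offsets]               # broadcast over batch

        attn = F.softmax(logits, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, T, D)
        return self.out(out)

# Usage sketch:
#   x = torch.randn(2, 16, 64)
#   layer = PositionBiasedSelfAttention(dim=64, num_heads=8)
#   y = layer(x)   # shape (2, 16, 64)
```

Because the positional term here depends only on offsets (not on the query/key projections), it can be computed once per layer and reused, which is one plausible source of the faster training and inference the abstract mentions; segment information could be encoded the same way with an additional bias table.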