Relative position embedding (RPE) is a successful method to explicitly and efficaciously encode position information into Transformer models. In this paper, we investigate the potential problems in Shaw-RPE and XL-RPE, which are the most representative and prevalent RPEs, and propose two novel RPEs called Low-level Fine-grained High-level Coarse-grained (LFHC) RPE and Gaussian Cumulative Distribution Function (GCDF) RPE. LFHC-RPE is an improvement on Shaw-RPE that enhances the perception ability at medium and long relative positions. GCDF-RPE utilizes the excellent properties of the Gaussian function to amend the prior encoding mechanism in XL-RPE. Experimental results on nine authoritative datasets demonstrate the effectiveness of our methods. Furthermore, GCDF-RPE achieves the best overall performance among the five RPEs compared.
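The two mechanisms described above can be sketched in code. This is a minimal, illustrative reading, not the paper's implementation: the log-spaced coarse buckets, the parameter names (`fine_range`, `num_coarse`, `max_pos`, `sigma`), and the exact bucket boundaries are assumptions. It shows the two underlying ideas: giving every short relative distance its own embedding index while merging medium and long distances into shared, progressively wider buckets (LFHC-style), and squashing a signed relative distance through a Gaussian CDF into a smooth, bounded value (GCDF-style).

```python
import math

def lfhc_bucket(rel_pos, fine_range=8, num_coarse=8, max_pos=128):
    """Map a signed relative position to an embedding index.

    Positions with |rel_pos| <= fine_range each get their own bucket
    (fine-grained); larger distances share log-spaced buckets
    (coarse-grained). Boundaries are illustrative assumptions.
    """
    sign = 0 if rel_pos >= 0 else 1
    d = abs(rel_pos)
    if d <= fine_range:
        bucket = d
    else:
        # Log-spaced coarse buckets: nearby long distances collapse
        # into the same bucket, clamped at the last one.
        frac = math.log(d / fine_range) / math.log(max_pos / fine_range)
        bucket = fine_range + 1 + min(num_coarse - 1, int(frac * num_coarse))
    return 2 * bucket + sign  # separate indices for left/right context

def gcdf_position(rel_pos, sigma=32.0):
    """Gaussian CDF of the relative distance: smooth, monotone, and
    bounded in (0, 1). sigma is a hypothetical scale parameter."""
    return 0.5 * (1.0 + math.erf(rel_pos / (sigma * math.sqrt(2.0))))
```

For example, `lfhc_bucket(100)` and `lfhc_bucket(101)` map to the same index (coarse-grained at long range), whereas `lfhc_bucket(3)` and `lfhc_bucket(4)` do not; `gcdf_position(0)` is exactly 0.5, with values saturating smoothly toward 0 and 1 for large negative and positive distances.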