Encoder-decoder models have been commonly used for many tasks such as machine translation and response generation. As previous research has reported, these models suffer from generating redundant repetition. In this research, we propose a new mechanism for encoder-decoder models that estimates the semantic difference of a source sentence before and after it is fed into the encoder-decoder model, capturing the consistency between the two sides. This mechanism helps reduce repeatedly generated tokens across a variety of tasks. Evaluation results on publicly available machine translation and response generation datasets demonstrate the effectiveness of our proposal.
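The abstract does not spell out how the semantic difference between the two sides is computed; a minimal sketch of one plausible realization is to mean-pool a sentence vector from the source-side token embeddings (before the model) and another from the decoder-side hidden states (after the model), and penalize their cosine distance as an auxiliary consistency term during training. The function name `consistency_penalty` and the pooling/distance choices below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def consistency_penalty(src_embeds, dec_states):
    """Hypothetical consistency term between the source sentence
    representation and the decoder-side representation.

    src_embeds: (batch, src_len, dim) source token embeddings (before the model)
    dec_states: (batch, tgt_len, dim) decoder hidden states (after the model)
    Returns a scalar in [0, 2]: 0 when the two sides agree semantically.
    """
    # Mean-pool over the length dimension to obtain sentence vectors.
    src_vec = src_embeds.mean(axis=1)          # (batch, dim)
    dec_vec = dec_states.mean(axis=1)          # (batch, dim)
    # Cosine similarity per batch element, with a small floor on the norms.
    num = (src_vec * dec_vec).sum(axis=-1)
    denom = np.linalg.norm(src_vec, axis=-1) * np.linalg.norm(dec_vec, axis=-1)
    cos = num / np.maximum(denom, 1e-9)
    # Cosine distance, averaged over the batch; adding this to the training
    # loss would push the decoder output to stay semantically close to the
    # source, which is one way to discourage redundant repetition.
    return float((1.0 - cos).mean())
```

In practice such a term would be weighted and added to the standard cross-entropy objective; the weighting scheme, pooling strategy, and distance metric would all be tunable design choices.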