In this paper, we investigate the Domain Generalization (DG) problem for supervised Paraphrase Identification (PI). We observe that the performance of existing PI models deteriorates dramatically when tested in an out-of-distribution (OOD) domain. We conjecture that this is caused by shortcut learning, i.e., these models tend to utilize cue words that are unique to a particular dataset or domain. To alleviate this issue and enhance the DG ability, we propose a PI framework based on Optimal Transport (OT). Our method forces the network to learn the necessary features for all the words in the input, which alleviates the shortcut learning problem. Experimental results show that our method improves the DG ability of PI models.
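The abstract does not detail the OT component, but the core intuition can be illustrated with a minimal sketch: entropy-regularized optimal transport (Sinkhorn iterations) between the token embeddings of a sentence pair produces a transport plan whose marginal constraints force every token to send and receive mass, so no word can be ignored outright. All names, dimensions, and hyperparameters below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sinkhorn(cost, a, b, eps=1.0, n_iters=200):
    """Entropy-regularized OT via Sinkhorn iterations (illustrative sketch).

    cost : (n, m) cost matrix between tokens of two sentences.
    a, b : marginal distributions over the tokens (uniform here).
    Returns the (n, m) transport plan.
    """
    K = np.exp(-cost / eps)           # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)             # scale columns to match marginal b
        u = a / (K @ v)               # scale rows to match marginal a
    return u[:, None] * K * v[None, :]

# Toy "token embeddings" for a sentence pair (random stand-ins).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))           # sentence 1: 4 tokens, 8-dim
y = rng.normal(size=(5, 8))           # sentence 2: 5 tokens

# Cost = pairwise squared Euclidean distance between token embeddings.
cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)

a = np.full(4, 1 / 4)                 # uniform mass over tokens
b = np.full(5, 1 / 5)
plan = sinkhorn(cost, a, b)

# The plan's marginals match a and b: every token participates in the
# alignment, which is the anti-shortcut intuition from the abstract.
assert np.allclose(plan.sum(axis=1), a, atol=1e-4)
assert np.allclose(plan.sum(axis=0), b, atol=1e-4)
```

In such a setup, the transport plan (or the resulting OT cost) could serve as an alignment signal or regularizer during training, distributing attention over all input words rather than a few dataset-specific cues.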