Resolving pronouns to their referents has long been studied as a fundamental natural language understanding problem. Previous work on pronoun coreference resolution (PCR) has mostly focused on resolving pronouns to mentions in text while ignoring the exophoric scenario. Exophoric pronouns are common in daily communication, where speakers may use pronouns directly to refer to objects present in the environment without first introducing them. Although such objects are not mentioned in the dialogue text, they can often be disambiguated by the general topics of the dialogue. Motivated by this, we propose to jointly leverage the local context and global topics of dialogues to solve the out-of-text PCR problem. Extensive experiments demonstrate the effectiveness of adding topic regularization for resolving exophoric pronouns.