Abstract Meaning Representation (AMR) has become popular for representing the meaning of natural language in graph structures. However, AMR does not represent scope information, posing a problem for its overall expressivity and specifically for drawing inferences from negated statements. This is the case with so-called "positive interpretations" of negated statements, in which implicit positive meaning is identified by inferring the opposite of the negation's focus. In this work, we investigate how potential positive interpretations (PPIs) can be represented in AMR. We propose a logically motivated AMR structure for PPIs that makes the focus of negation explicit, and we sketch an initial proposal for a systematic methodology to generate this more expressive structure.
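To illustrate the expressivity problem the abstract describes, here is how standard AMR encodes a negated sentence in Penman notation, using the conventional `:polarity -` attribute. The sentence and graph are illustrative examples, not drawn from the paper itself; note that the graph marks only that the event is negated, without indicating which element is the focus of negation:

```
# "The boy did not eat the pizza."
(e / eat-01
   :polarity -        ; negation attaches to the event as a whole
   :ARG0 (b / boy)
   :ARG1 (p / pizza))
```

Because `:polarity -` scopes over nothing in particular, this single graph is compatible with several positive interpretations (e.g., someone else ate the pizza, or the boy ate something else), depending on whether the focus of negation is the agent, the patient, or the predicate. The paper's proposed structure aims to make that focus explicit.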