Coupled with biaffine decoders, transformers have been effectively adapted to text-to-graph transduction and have achieved state-of-the-art performance on AMR parsing. Many prior works, however, rely on the biaffine decoder for either or both arc and label prediction, although most features used by the decoder may already be learned by the transformer. This paper presents a novel approach to AMR parsing that combines heterogeneous data (tokens, concepts, labels) into a single input to a transformer to learn attention, and uses only the attention matrices from the transformer to predict all elements in AMR graphs (concepts, arcs, labels). Although our models use significantly fewer parameters than the previous state-of-the-art graph parser, they show similar or better accuracy on AMR 2.0 and 3.0.
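To make the attention-only decoding concrete, below is a minimal sketch of the core idea: tokens, concepts, and labels are embedded as one combined input sequence, and the raw self-attention logits between positions are read off directly as arc scores, with no separate biaffine decoder. The single-head PyTorch formulation and all names (`AttentionArcScorer`, `arc_scores`) are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: self-attention logits reused as arc scores.
import torch
import torch.nn as nn

class AttentionArcScorer(nn.Module):
    """Single-head self-attention whose raw attention logits double as arc scores."""
    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) -- embeddings of the combined
        # token/concept/label sequence fed to the transformer.
        q, k = self.q(x), self.k(x)
        # (batch, seq_len, seq_len): entry [b, i, j] scores an arc i -> j,
        # so the attention matrix itself serves as the arc predictor.
        return torch.einsum("bid,bjd->bij", q, k) * self.scale

# Toy usage: batch of 2 sequences, 5 positions (tokens + concepts), 64-dim embeddings.
x = torch.randn(2, 5, 64)
arc_scores = AttentionArcScorer(64)(x)
print(arc_scores.shape)  # torch.Size([2, 5, 5])
```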