While numerous attempts have been made to jointly parse syntax and semantics, high performance in one domain typically comes at the price of performance in the other. This trade-off contradicts the large body of research focusing on the rich interactions at the syntax--semantics interface. We explore multiple model architectures that allow us to exploit the rich syntactic and semantic annotations contained in the Universal Decompositional Semantics (UDS) dataset, jointly parsing Universal Dependencies and UDS to obtain state-of-the-art results in both formalisms. We analyze the behavior of a joint model of syntax and semantics, finding patterns supported by linguistic theory at the syntax--semantics interface. We then investigate to what degree joint modeling generalizes to a multilingual setting, where we find similar trends across 8 languages.