AM dependency parsing is a method for neural semantic graph parsing that exploits the principle of compositionality. While AM dependency parsers have been shown to be fast and accurate across several graphbanks, they require explicit annotations of the compositional tree structures for training. In the past, these were obtained using complex graphbank-specific heuristics written by experts. Here we show how they can instead be trained directly on the graphs with a neural latent-variable model, drastically reducing the amount and complexity of manual heuristics. We demonstrate that our model picks up on several linguistic phenomena on its own and achieves comparable accuracy to supervised training, greatly facilitating the use of AM dependency parsing for new sembanks.
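To make the latent-variable idea concrete, below is a minimal, purely illustrative sketch of training with a latent compositional structure: the model scores a set of candidate latent structures and is trained by marginalizing over all candidates compatible with the observed graph. This is not the paper's actual AM parser; the class `LatentStructureParser`, the function `marginal_nll`, the toy candidate set, and the compatibility mask are all hypothetical placeholders assumed for illustration.

```python
# Illustrative sketch only: a latent-variable objective that marginalizes over
# a small, fixed set of candidate latent structures (a toy stand-in for
# compositional trees). All names and shapes here are hypothetical.
import torch
import torch.nn as nn

class LatentStructureParser(nn.Module):
    def __init__(self, vocab_size, num_candidates, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        # One score per candidate latent structure.
        self.structure_scorer = nn.Linear(hidden, num_candidates)

    def forward(self, token_ids):
        x = self.embed(token_ids)              # (batch, seq, hidden)
        _, (h, _) = self.encoder(x)            # h: (1, batch, hidden)
        return self.structure_scorer(h[-1])    # (batch, num_candidates)

def marginal_nll(scores, compatible_mask):
    """Negative log-likelihood of the observed graph, marginalizing over all
    latent structures that are compatible with it (mask == 1)."""
    log_probs = torch.log_softmax(scores, dim=-1)
    # log sum over compatible candidates of p(structure | sentence)
    masked = log_probs.masked_fill(compatible_mask == 0, float("-inf"))
    return -torch.logsumexp(masked, dim=-1).mean()

# Toy usage: 2 sentences, 5 candidate structures each; the mask marks which
# candidates would yield the gold graph.
tokens = torch.randint(0, 100, (2, 7))
mask = torch.tensor([[1, 0, 1, 0, 0],
                     [0, 1, 0, 0, 1]])
model = LatentStructureParser(vocab_size=100, num_candidates=5)
loss = marginal_nll(model(tokens), mask)
loss.backward()
```

In practice, the space of latent compositional trees is far too large to enumerate explicitly, so a real system would need dynamic programming or sampling over derivations rather than the fixed candidate list used in this toy example.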