External syntactic and semantic information has been largely ignored by existing neural coreference resolution models. In this paper, we present a heterogeneous graph-based model to incorporate the syntactic and semantic structures of sentences. The proposed graph contains a syntactic sub-graph, where tokens are connected based on a dependency tree, and a semantic sub-graph, which contains arguments and predicates as nodes and semantic role labels as edges. By applying a graph attention network, we obtain syntactically and semantically augmented word representations, which are integrated using an attentive integration layer and a gating mechanism. Experiments on the OntoNotes 5.0 benchmark show the effectiveness of our proposed model.
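The two core operations described above — attention over a sub-graph's edges and gated fusion of the resulting syntactic and semantic word vectors — can be sketched as follows. This is a minimal illustration, not the paper's implementation: all parameter names (`W`, `a`, `W_g`, `b_g`) are hypothetical, and the adjacency matrix stands in for either the dependency-tree or the semantic-role sub-graph.

```python
import numpy as np

def masked_softmax(logits, mask):
    """Softmax over each row, restricted to positions where mask is True."""
    logits = np.where(mask, logits, -1e9)
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gat_layer(H, A, W, a):
    """One GAT-style attention layer over adjacency A (a sketch).
    H: (n, d_in) node features; A: (n, n) 0/1 adjacency;
    W: (d_in, d_out) projection; a: (2*d_out,) attention vector."""
    Z = H @ W                                    # projected node features
    n = Z.shape[0]
    logits = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # e_ij = LeakyReLU(a^T [z_i || z_j]), as in the original GAT
            s = a @ np.concatenate([Z[i], Z[j]])
            logits[i, j] = s if s > 0 else 0.2 * s
    alpha = masked_softmax(logits, A.astype(bool))
    return alpha @ Z                             # attention-weighted aggregation

def gated_fusion(h_syn, h_sem, W_g, b_g):
    """Fuse syntactically and semantically augmented vectors for one token
    with an element-wise sigmoid gate (hypothetical parameterization)."""
    g = 1.0 / (1.0 + np.exp(-(np.concatenate([h_syn, h_sem]) @ W_g + b_g)))
    return g * h_syn + (1.0 - g) * h_sem
```

Running each sub-graph through `gat_layer` and fusing the per-token outputs with `gated_fusion` yields word representations that draw on both structures, in the spirit of the model summarized above.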