The lack of sufficient human-annotated data is a major challenge for Abstract Meaning Representation (AMR) parsing. To alleviate this problem, previous works usually make use of silver data or pre-trained language models. In particular, one recent seq-to-seq work directly fine-tunes a pre-trained encoder-decoder language model on linearized AMR graph sequences and achieves new state-of-the-art results, outperforming previous works by a large margin. However, this approach makes decoding relatively slow. In this work, we investigate alternative approaches that achieve competitive performance at faster speeds. We propose a simplified AMR parser and a pre-training technique for the effective use of silver data. We conduct extensive experiments on the widely used AMR2.0 dataset, and the results demonstrate that our Transformer-based AMR parser achieves the best performance among seq2graph-based models. Furthermore, with silver data, our model achieves results competitive with the SOTA model while decoding an order of magnitude faster. Detailed analyses provide further insight into the proposed model and the effectiveness of the pre-training technique.
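To make the described training recipe concrete, below is a minimal sketch (not the authors' released code) of the two-stage schedule the abstract implies: pre-train the parser on large, noisy silver data, then fine-tune on the smaller gold AMR annotations. The `ToyParser` model, the toy datasets, and all hyper-parameters are illustrative placeholders, not the paper's actual architecture or settings.

```python
# Illustrative two-stage training: silver pre-training, then gold fine-tuning.
# All names and numbers here are hypothetical stand-ins.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def make_toy_dataset(n, vocab=100, seq_len=8):
    """Stand-in for (sentence, linearized-AMR) pairs as token-id tensors."""
    x = torch.randint(0, vocab, (n, seq_len))
    y = torch.randint(0, vocab, (n, seq_len))
    return TensorDataset(x, y)

class ToyParser(nn.Module):
    """Placeholder for the Transformer-based seq2graph parser."""
    def __init__(self, vocab=100, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.proj = nn.Linear(dim, vocab)

    def forward(self, x):
        return self.proj(self.embed(x))

def run_stage(model, dataset, epochs, lr):
    """One training stage; reused for silver pre-training and gold fine-tuning."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in DataLoader(dataset, batch_size=16, shuffle=True):
            logits = model(x)                       # (batch, seq, vocab)
            loss = loss_fn(logits.transpose(1, 2), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

model = ToyParser()
silver = make_toy_dataset(512)  # large, machine-annotated (noisy)
gold = make_toy_dataset(64)     # small, human-annotated (e.g. AMR2.0)

run_stage(model, silver, epochs=1, lr=1e-3)  # stage 1: silver pre-training
run_stage(model, gold, epochs=3, lr=1e-4)    # stage 2: gold fine-tuning, lower lr
```

The design point the sketch captures is that the same training loop serves both stages; only the data source and learning rate change, so the silver stage acts as task-specific pre-training rather than a separate objective.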