In this paper, we present the first statistical parser for Lambek categorial grammar (LCG), a grammatical formalism for which the graphical proof method known as *proof nets* is applicable. Our parser incorporates proof net structure and constraints into a system based on self-attention networks via novel model elements. Our experiments on an English LCG corpus show that incorporating term graph structure is helpful to the model, improving both parsing accuracy and coverage. Moreover, we derive novel loss functions by expressing proof net constraints as differentiable functions of our model output, enabling us to train our parser without ground-truth derivations.
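To illustrate the kind of differentiable constraint the abstract describes, here is a minimal sketch, not the paper's actual formulation: proof nets require each atomic formula to participate in exactly one axiom link, which can be relaxed by row-normalising a matrix of link scores and penalising columns whose probability mass deviates from one. The function name and the quadratic penalty are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def matching_constraint_loss(scores):
    """Hypothetical relaxation of the proof-net linking constraint.

    Each row of `scores` holds link scores from one source atom to
    every candidate target atom. After a row-wise softmax, a valid
    (one-to-one) linking corresponds to a doubly stochastic matrix,
    so we penalise column masses that deviate from 1.
    """
    P = softmax(scores, axis=1)      # each row: a distribution over targets
    col_mass = P.sum(axis=0)         # ideally all ones for a matching
    return float(((col_mass - 1.0) ** 2).sum())
```

A score matrix that concentrates two rows on the same target column incurs a large penalty, while a near-permutation matrix incurs almost none; in a real parser this term would be computed inside an autograd framework so its gradient can train the link scorer without gold derivations.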