While cross-lingual techniques are finding increasing success in a wide range of Natural Language Processing tasks, their application to Semantic Role Labeling (SRL) has been strongly limited by the fact that each language adopts its own linguistic formalism, from PropBank for English to AnCora for Spanish and PDT-Vallex for Czech, inter alia. In this work, we address this issue and present a unified model to perform cross-lingual SRL over heterogeneous linguistic resources. Our model implicitly learns a high-quality mapping for different formalisms across diverse languages without resorting to word alignment and/or translation techniques. We find that our cross-lingual system is not only competitive with the current state of the art but also robust to low-data scenarios. Most interestingly, our unified model is able to annotate a sentence in a single forward pass with all the inventories it was trained on, providing a tool for the analysis and comparison of linguistic theories across different languages. We release our code and model at https://github.com/SapienzaNLP/unify-srl.
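To make the "single forward pass over all inventories" idea concrete, the sketch below shows one plausible layout: a shared multilingual encoder with one role-classification head per predicate-argument inventory. This is a minimal illustration under assumed names and label sizes (`MultiInventorySRL`, the inventory dictionary, the use of `xlm-roberta-base`); it is not the architecture released at https://github.com/SapienzaNLP/unify-srl.

```python
# Illustrative sketch only: a shared multilingual encoder with one classification
# head per inventory (e.g., PropBank, AnCora, PDT-Vallex). Label counts are hypothetical.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class MultiInventorySRL(nn.Module):
    def __init__(self, encoder_name="xlm-roberta-base", inventories=None):
        super().__init__()
        # Shared multilingual encoder: every inventory reads the same representations.
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # One linear head per inventory (assumed label-set sizes for illustration).
        inventories = inventories or {"propbank": 106, "ancora": 40, "pdt_vallex": 60}
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden, n_labels) for name, n_labels in inventories.items()}
        )

    def forward(self, input_ids, attention_mask):
        # A single pass through the encoder...
        states = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # ...then every inventory head labels the same token representations.
        return {name: head(states) for name, head in self.heads.items()}


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
    model = MultiInventorySRL()
    batch = tokenizer("The cat chased the mouse.", return_tensors="pt")
    with torch.no_grad():
        logits = model(batch["input_ids"], batch["attention_mask"])
    # One set of role logits per inventory, all produced from one encoder pass.
    for inventory, scores in logits.items():
        print(inventory, scores.shape)  # (batch, seq_len, n_labels)
```

In such a setup, the shared encoder is what allows annotations from different formalisms to be compared on the same sentence, since every head scores the identical contextual representations.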