We study multilingual AMR parsing from the perspective of knowledge distillation, where the aim is to learn and improve a multilingual AMR parser by using an existing English parser as its teacher. We constrain our exploration to a strict multilingual setting: there is but one model to parse all different languages, including English. We identify that noisy input and precise output are the key to successful distillation. Together with extensive pre-training, we obtain an AMR parser whose performance surpasses all previously published results on four different foreign languages, namely German, Spanish, Italian, and Chinese, by large margins (up to 18.8 Smatch points on Chinese and, on average, 11.3 Smatch points). Our parser also achieves performance on English comparable to the latest state-of-the-art English-only parsers.
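To make the "noisy input, precise output" recipe concrete, the sketch below shows one way the distillation data could be assembled: the English teacher parses the clean English sentence (precise output), while the student is trained on the machine-translated foreign sentence (noisy input). This is a minimal illustration under those assumptions, not the authors' actual code; translate, teacher_parse, and the toy stand-ins are hypothetical placeholders.

# Minimal sketch of silver-data construction for noisy-input / precise-output
# distillation; all names are illustrative placeholders.
from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class SilverExample:
    source: str  # noisy input: the sentence in the target (foreign) language
    target: str  # precise output: linearized AMR predicted by the English teacher


def build_silver_data(
    english_sentences: Iterable[str],
    translate: Callable[[str], str],      # English -> foreign MT (introduces noise)
    teacher_parse: Callable[[str], str],  # trained English AMR parser (precise graphs)
) -> List[SilverExample]:
    """Pair noisy foreign-language inputs with precise teacher-produced AMR graphs.

    The teacher parses the clean English sentence, so the target graph is as
    accurate as the teacher allows, while the input the student sees is the
    noisier machine translation of that sentence.
    """
    data = []
    for sent in english_sentences:
        noisy_input = translate(sent)      # noisy input for the student
        precise_amr = teacher_parse(sent)  # precise output from the teacher
        data.append(SilverExample(source=noisy_input, target=precise_amr))
    return data


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; a real MT system and a real
    # English AMR parser would replace these.
    toy_translate = lambda s: "[DE] " + s
    toy_teacher = lambda s: "(w / want-01 :ARG0 (b / boy))"
    pairs = build_silver_data(["The boy wants to go."], toy_translate, toy_teacher)
    print(pairs[0])

The resulting silver pairs would then be used to fine-tune the single multilingual student model alongside gold English data, which is one plausible reading of the strict multilingual setting described above.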