Translation divergences are varied and widespread, challenging approaches that rely on parallel text. To annotate translation divergences, we propose a schema grounded in the Abstract Meaning Representation (AMR), a sentence-level semantic framework instantiated for a number of languages. By comparing parallel AMR graphs, we can identify specific points of divergence. Each divergence is labeled with both a type and a cause. We release a small corpus of annotated English-Spanish data, and analyze the annotations in our corpus.
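For a concrete sense of what comparing parallel AMR graphs involves, the sketch below loads a toy English-Spanish graph pair with the penman library and lists concepts and roles that appear on only one side. The graph pair, the helper names, and the idea of treating mismatched concepts as candidate divergence points are illustrative assumptions; the paper's actual type and cause labels are not reproduced here.

```python
# A minimal sketch of comparing parallel AMR graphs for divergence points,
# using the penman library. The toy graphs and the "one-side-only concept"
# heuristic are illustrative placeholders, not the paper's annotation schema.
import penman

EN = """
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-02 :ARG0 b))
"""

ES = """
(q / querer-01
   :ARG0 (n / niño)
   :ARG1 (i / ir-01 :ARG0 n))
"""

def concept_set(graph):
    """Return the set of concepts (node labels) in an AMR graph."""
    return {target for _, _, target in graph.instances()}

def role_list(graph):
    """Return the sorted list of relation labels (edge roles) in an AMR graph."""
    return sorted(role for _, role, _ in graph.edges())

en_graph = penman.decode(EN)
es_graph = penman.decode(ES)

# Concepts present in one graph but not the other are candidate divergence
# points (here they mostly reflect lexical differences between the languages).
print("EN-only concepts:", concept_set(en_graph) - concept_set(es_graph))
print("ES-only concepts:", concept_set(es_graph) - concept_set(en_graph))

# Differing role inventories can signal structural divergence.
print("EN roles:", role_list(en_graph))
print("ES roles:", role_list(es_graph))
```

In practice the annotation is done by hand over aligned graphs; a script like this would at most surface candidate points for an annotator to label with a type and a cause.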
References used
https://aclanthology.org/
In cross-lingual Abstract Meaning Representation (AMR) parsing, researchers develop models that project sentences from various languages onto their AMRs to capture their essential semantic structures: given a sentence in any language, we aim to capture […]
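As a usage-level illustration of the sentence-to-graph task, here is a minimal sketch with the amrlib package and its stock (English) pretrained parser; the cross-lingual parsers described above expose the same kind of sentence-in, graph-out interface, so the model choice here is only an assumption for demonstration.

```python
# A minimal sketch of sentence-to-AMR parsing with the amrlib package.
# amrlib's stock models are English; multilingual parsers follow the same
# pattern of mapping sentences to PENMAN-serialized graphs.
import amrlib

# Loads the default pretrained sentence-to-graph model (installed separately
# following amrlib's model download instructions).
stog = amrlib.load_stog_model()

sentences = ["The boy wants to go."]
graphs = stog.parse_sents(sentences)

for penman_str in graphs:
    print(penman_str)  # one PENMAN-serialized AMR graph per input sentence
```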
Transformers that are pre-trained on multilingual corpora, such as mBERT and XLM-RoBERTa, have achieved impressive cross-lingual transfer capabilities. In the zero-shot transfer setting, only English training data is used, and the fine-tuned model is […]
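A minimal sketch of the zero-shot transfer recipe, assuming the Hugging Face transformers library: fine-tune XLM-RoBERTa on English examples only, then apply it unchanged to another language. The toy sentiment task, data, and hyperparameters are placeholders, not taken from any of the papers above.

```python
# Zero-shot cross-lingual transfer: fine-tune on English, evaluate on Spanish.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2
)

# English-only training examples (toy sentiment labels).
train_texts = ["I loved this film.", "This film was terrible."]
train_labels = torch.tensor([1, 0])

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few toy epochs on the tiny English set
    batch = tokenizer(train_texts, padding=True, return_tensors="pt")
    out = model(**batch, labels=train_labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Zero-shot evaluation: the fine-tuned model is applied directly to Spanish
# input that was never seen during training.
model.eval()
with torch.no_grad():
    batch = tokenizer(["Me encantó esta película."], return_tensors="pt")
    pred = model(**batch).logits.argmax(dim=-1)
print(pred)
```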
Social media is notoriously difficult for existing natural language processing tools to process, because of spelling errors, non-standard words, shortenings, and non-standard capitalization and punctuation. One method to circumvent these issues is to normalize […]
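To make the normalization idea concrete, here is a toy dictionary-based normalizer; real systems learn these mappings from data, and the lexicon, punctuation handling, and whitespace tokenization below are illustrative placeholders only.

```python
# A toy sketch of lexical normalization for social media text.
import re

NORMALIZATION_LEXICON = {
    "u": "you",
    "r": "are",
    "gr8": "great",
    "pls": "please",
    "2moro": "tomorrow",
}

def normalize(tweet: str) -> str:
    """Map non-standard tokens to canonical forms; crude whitespace tokenization."""
    out = []
    for tok in tweet.split():
        # Separate trailing punctuation so "gr8," still hits the lexicon.
        m = re.match(r"^(.*?)([!?.,]*)$", tok)
        core, punct = m.group(1), m.group(2)
        core = NORMALIZATION_LEXICON.get(core.lower(), core)
        out.append(core + punct)
    text = " ".join(out)
    # Capitalize the first character as a crude sentence-casing repair.
    return text[:1].upper() + text[1:]

print(normalize("u r gr8, see u 2moro!"))
# -> "You are great, see you tomorrow!"
```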
It is now established that modern neural language models can be successfully trained on multiple languages simultaneously without changes to the underlying architecture, providing an easy way to adapt a variety of NLP models to low-resource languages. […]
We study multilingual AMR parsing from the perspective of knowledge distillation, where the aim is to learn and improve a multilingual AMR parser by using an existing English parser as its teacher. We constrain our exploration in a strict multilingual setting […]
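The core of the teacher-student setup can be sketched as a distillation loss that pushes the student's output distribution toward the teacher's. The snippet below shows a generic token-level KL formulation in PyTorch, with random tensors standing in for decoder logits; the temperature, shapes, and loss form are assumptions for illustration, not the exact objective or training pipeline of the paper.

```python
# A minimal sketch of token-level knowledge distillation: a (multilingual)
# student is trained to match the output distribution of a fixed English
# teacher. Toy random tensors stand in for the parsers' decoder logits.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student distributions."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t**2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)

# Toy shapes: batch of 2 target sequences, 5 decoding steps, 100-symbol vocabulary.
teacher_logits = torch.randn(2, 5, 100)                       # frozen teacher outputs
student_logits = torch.randn(2, 5, 100, requires_grad=True)   # trainable student outputs

loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
print(loss.item())
```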