The quality and quantity of parallel sentences are crucial training data for building neural machine translation (NMT) systems. However, such resources are unavailable for many low-resource language pairs. Many existing methods require strong supervision and are therefore not suitable. Although several attempts have been made to develop unsupervised models, they ignore the language-invariant representations shared across languages. In this paper, we propose a transfer-learning-based approach to mine parallel sentences in the unsupervised setting. With the help of bilingual corpora from rich-resource language pairs, we can mine parallel sentences without bilingual supervision for low-resource language pairs. Experiments show that our approach improves the quality of mined parallel sentences compared with previous methods. In particular, we achieve excellent results on two real-world low-resource language pairs.
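The core mining step described above can be illustrated with a minimal sketch: score candidate sentence pairs by cosine similarity in a shared embedding space and keep high-confidence nearest neighbours. Note that the random embeddings, the threshold of 0.8, and the plain nearest-neighbour scoring below are illustrative assumptions, not the paper's exact method; in practice the embeddings would come from an encoder transferred from a rich-resource language pair.

```python
# Hedged sketch of unsupervised parallel-sentence mining via
# nearest-neighbour search over language-invariant sentence embeddings.
# Embeddings here are synthetic stand-ins (assumption): tgt vectors are
# noisy copies of src vectors, simulating near-parallel sentences.
import numpy as np

rng = np.random.default_rng(0)

def normalize(m):
    # L2-normalize rows so the dot product equals cosine similarity.
    return m / np.linalg.norm(m, axis=1, keepdims=True)

# Toy shared space: 4 "source" and 4 "target" sentence embeddings.
src = normalize(rng.standard_normal((4, 8)))
tgt = normalize(src + 0.05 * rng.standard_normal((4, 8)))

sim = src @ tgt.T            # cosine similarity matrix (4 x 4)
best = sim.argmax(axis=1)    # best target candidate per source sentence
scores = sim.max(axis=1)

# Keep only pairs above a confidence threshold (0.8 is an assumption).
mined = [(i, int(j), float(s))
         for i, (j, s) in enumerate(zip(best, scores)) if s > 0.8]
print(mined)
```

Because each toy target vector is a lightly perturbed copy of the matching source vector, the mined pairs align each source index with its own target index. Real systems typically replace the raw cosine score with a margin-based score to penalize hub sentences.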