When building machine translation systems, one often needs to make the best of heterogeneous sets of parallel data in training, and to robustly handle inputs from unexpected domains in testing. This multi-domain scenario has attracted a lot of recent work that falls under the general umbrella of transfer learning. In this study, we revisit multi-domain machine translation, with the aim of formulating the motivations for developing such systems and the associated expectations with respect to performance. Our experiments with a large sample of multi-domain systems show that most of these expectations are hardly met, and suggest that further work is needed to better analyze the current behaviour of multi-domain systems and to make them fully deliver on their promises.