Question answering (QA) is one of the most challenging and impactful tasks in natural language processing. Most research in QA, however, has focused on the open-domain or monolingual setting, while most real-world applications deal with specific domains or languages. In this tutorial, we attempt to bridge this gap. First, we introduce standard benchmarks in multi-domain and multilingual QA. In both scenarios, we discuss state-of-the-art approaches that achieve impressive performance, ranging from zero-shot transfer learning to out-of-the-box use of open-domain QA systems. Finally, we present open research problems that this new research agenda poses, such as multi-task learning, cross-lingual transfer learning, domain adaptation, and training large-scale pre-trained multilingual language models.
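To make the zero-shot cross-lingual transfer setting mentioned above concrete, the minimal sketch below loads a multilingual extractive-QA model that was fine-tuned only on English data and queries it in German, with no German QA training. It assumes the Hugging Face transformers library; the checkpoint name deepset/xlm-roberta-base-squad2 is one example of such a model, not one prescribed by the tutorial.

# A minimal sketch of zero-shot cross-lingual QA transfer, assuming the
# Hugging Face transformers library is installed. The checkpoint below is
# an example multilingual model fine-tuned only on English SQuAD 2.0 data.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/xlm-roberta-base-squad2")

# The model has seen no German QA training data, yet it can answer a
# German question over a German passage via cross-lingual transfer.
result = qa(
    question="Wie hoch ist die Zugspitze?",
    context="Die Zugspitze ist mit 2962 Metern der höchste Berg Deutschlands.",
)
print(result["answer"], result["score"])

The same pattern underlies the multi-domain case: a model trained on one distribution (here, an English QA dataset) is evaluated directly on another (another language or domain), which is exactly the generalization gap the tutorial's benchmarks measure.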