The ability to generate natural-language questions with controlled complexity levels is highly desirable, as it further expands the applicability of question generation. In this paper, we propose an end-to-end neural complexity-controllable question generation model, which incorporates a mixture of experts (MoE) as the selector of soft templates to improve the accuracy of complexity control and the quality of generated questions. The soft templates capture question similarity while avoiding the expensive construction of actual templates. Our method introduces a novel cross-domain complexity estimator to assess the complexity of a question, taking into account the passage, the question, the answer, and their interactions. Experimental results on two benchmark QA datasets demonstrate that our QG model is superior to state-of-the-art methods in both automatic and manual evaluation. Moreover, our complexity estimator is significantly more accurate than the baselines in both in-domain and out-of-domain settings.
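To make the MoE-as-soft-template-selector idea concrete, the following is a minimal sketch of how a gating network could mix a bank of soft-template embeddings into a single conditioning vector. All names, dimensions, and the random initialization are illustrative assumptions for exposition; they are not the paper's actual architecture or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper).
HIDDEN = 16      # encoder hidden size
N_EXPERTS = 4    # number of experts / soft templates

# Soft templates: one learnable embedding per expert (random stand-ins here).
templates = rng.normal(size=(N_EXPERTS, HIDDEN))
# Gating network: maps an encoded context vector to expert logits.
W_gate = rng.normal(size=(HIDDEN, N_EXPERTS))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def select_soft_template(context_vec):
    """Return (gate weights, mixed soft template) for one context vector.

    The gate produces a distribution over experts; the "selected" soft
    template is the gate-weighted sum of the template embeddings, so
    selection stays differentiable end to end.
    """
    gate = softmax(context_vec @ W_gate)   # shape (N_EXPERTS,), sums to 1
    mixed = gate @ templates               # shape (HIDDEN,)
    return gate, mixed

# Stand-in for an encoder state summarizing passage + answer.
context = rng.normal(size=HIDDEN)
gate, soft_template = select_soft_template(context)
```

The mixed `soft_template` vector would then condition the question decoder; because the gate is a soft distribution rather than a hard argmax, the whole selector can be trained jointly with the generator.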