Conversational machine reading (CMR) requires machines to communicate with humans through multi-turn interactions that alternate between two salient dialogue states: decision making and question generation. In open CMR settings, a more realistic scenario, the retrieved background knowledge is noisy, which poses severe challenges for information transmission. Existing studies commonly train independent or pipelined systems for the two subtasks. However, these methods rely on hard-label decisions to trigger question generation, which ultimately hinders model performance. In this work, we propose an effective gating strategy that smooths the two dialogue states within a single decoder and bridges decision making and question generation to provide a richer dialogue-state reference. Experiments on the OR-ShARC dataset show the effectiveness of our method, which achieves new state-of-the-art results.
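To make the soft-gating idea concrete, here is a minimal sketch of how a smoothed decision signal could weight question generation inside a single decoder, instead of a hard argmax label activating a separate generator. This is an illustrative PyTorch-style example under assumed shapes and class names (e.g. an "inquire" decision class, `SoftGatedDecoder`, `decision_head`, `qg_head`), not the authors' implementation.

```python
import torch
import torch.nn as nn


class SoftGatedDecoder(nn.Module):
    """Illustrative sketch: question-generation logits are scaled by a
    smoothed decision probability rather than a hard-label trigger."""

    def __init__(self, hidden_size: int, vocab_size: int, num_decisions: int = 4):
        super().__init__()
        # Decision head over e.g. {yes, no, irrelevant, inquire} (assumed label set)
        self.decision_head = nn.Linear(hidden_size, num_decisions)
        # Shared decoder states projected to the vocabulary for question generation
        self.qg_head = nn.Linear(hidden_size, vocab_size)
        # Assume the last class means "a follow-up question is needed"
        self.inquire_index = num_decisions - 1

    def forward(self, dialogue_state: torch.Tensor, decoder_states: torch.Tensor):
        # dialogue_state: (batch, hidden); decoder_states: (batch, seq_len, hidden)
        # Soft decision distribution instead of an argmax hard label
        decision_probs = torch.softmax(self.decision_head(dialogue_state), dim=-1)
        # Probability of needing a follow-up question acts as a soft gate
        gate = decision_probs[..., self.inquire_index].unsqueeze(-1).unsqueeze(-1)
        # The decision signal flows into generation as a richer, differentiable reference
        qg_logits = gate * self.qg_head(decoder_states)
        return decision_probs, qg_logits


# Usage sketch with dummy tensors
model = SoftGatedDecoder(hidden_size=768, vocab_size=32000)
decision_probs, qg_logits = model(torch.randn(2, 768), torch.randn(2, 16, 768))
```

The point of the sketch is only the coupling: because the gate is a probability rather than a hard 0/1 decision, gradients from question generation can also inform decision making, which is the kind of richer dialogue-state reference the abstract describes.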