Neural models trained for next-utterance generation in dialogue learn to mimic the n-gram sequences in the training set when trained with objectives such as negative log-likelihood (NLL) or cross-entropy. These commonly used training objectives do not encourage generating alternate responses to a context, and the effect of minimizing an alternate objective, one that pushes the model to generate an alternate response and scores it on semantic similarity, has not been well studied. We hypothesize that a language generation model can improve its diversity by learning to generate alternate text during training while minimizing a semantic loss as an auxiliary objective. We explore this idea on two differently sized data sets for the task of next-utterance generation in goal-oriented dialogues. We make two observations: (1) minimizing a semantic objective improved the diversity of responses in the smaller data set (Frames), but was only as good as minimizing the NLL in the larger data set (MultiWoZ); (2) large language model embeddings can be more useful as a semantic loss objective than as initialization for token embeddings.
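To make the auxiliary objective concrete, the sketch below shows one way such a combined loss could be computed. It is a minimal illustration, not the paper's implementation: the function names, the weighting factor `alpha`, and the assumption that sentence-level embeddings of the generated (alternate) response and the gold response are already available from some pretrained encoder are ours.

```python
import torch
import torch.nn.functional as F


def semantic_loss(gen_emb: torch.Tensor, ref_emb: torch.Tensor) -> torch.Tensor:
    """1 - cosine similarity between generated and reference response embeddings."""
    return (1.0 - F.cosine_similarity(gen_emb, ref_emb, dim=-1)).mean()


def combined_loss(logits: torch.Tensor, target_ids: torch.Tensor,
                  gen_emb: torch.Tensor, ref_emb: torch.Tensor,
                  pad_id: int = 0, alpha: float = 0.5) -> torch.Tensor:
    # Standard token-level NLL / cross-entropy over the gold next utterance.
    nll = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        target_ids.view(-1),
        ignore_index=pad_id,
    )
    # Auxiliary semantic objective over sentence-level embeddings of the
    # decoded alternate response and the gold response (illustrative only).
    sem = semantic_loss(gen_emb, ref_emb)
    return nll + alpha * sem
```

Here `alpha` trades off the fluency-oriented NLL term against the semantic term; the sentence embeddings could come from any pretrained large language model encoder, which is the role such embeddings play in the second observation above.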