Various machine learning tasks can benefit from access to external information of different modalities, such as text and images. Recent work has focused on learning architectures with large memories capable of storing this knowledge. We propose augmenting generative Transformer neural networks with KNN-based Information Fetching (KIF) modules. Each KIF module learns a read operation to access fixed external knowledge. We apply these modules to generative dialog modeling, a challenging task where information must be flexibly retrieved and incorporated to maintain the topic and flow of conversation. We demonstrate the effectiveness of our approach by identifying relevant knowledge required for knowledgeable but engaging dialog from Wikipedia, images, and human-written dialog utterances, and show that leveraging this retrieved information improves model performance, measured by automatic and human evaluation.
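To make the idea concrete, below is a minimal sketch of what a KNN-based Information Fetching module could look like, based only on the abstract's description: a learned read operation over a fixed external memory of pre-encoded knowledge. All names (KIFModule, query_proj, k) and the score-weighted combination of neighbours are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a KIF module: a learned query projection
# performs a KNN lookup over fixed, pre-encoded external knowledge.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KIFModule(nn.Module):
    def __init__(self, hidden_dim: int, memory: torch.Tensor, k: int = 5):
        super().__init__()
        # Fixed external knowledge: pre-encoded vectors, never updated during training.
        self.register_buffer("memory", memory)               # (num_items, hidden_dim)
        self.query_proj = nn.Linear(hidden_dim, hidden_dim)  # the learned read operation
        self.k = k

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, hidden_dim), e.g. a pooled dialog-context encoding.
        query = self.query_proj(hidden)
        scores = query @ self.memory.t()                 # (batch, num_items)
        top_scores, top_idx = scores.topk(self.k, dim=-1)
        retrieved = self.memory[top_idx]                 # (batch, k, hidden_dim)
        # Weight the k nearest neighbours by their normalized retrieval scores.
        weights = F.softmax(top_scores, dim=-1).unsqueeze(-1)
        return (weights * retrieved).sum(dim=1)          # (batch, hidden_dim)

# Usage: the fetched knowledge vector can then be concatenated to, or
# gated with, the Transformer decoder's hidden states before generation.
memory = torch.randn(10_000, 512)   # stand-in for encoded Wikipedia sentences
kif = KIFModule(hidden_dim=512, memory=memory, k=5)
context = torch.randn(8, 512)
knowledge = kif(context)            # (8, 512)
```

How the fetched representation is fused back into the generative model is a design choice the abstract leaves open; the weighted sum above is one simple possibility.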