An overarching goal of natural language processing is to enable machines to communicate seamlessly with humans. However, natural language can be ambiguous or unclear. In cases of uncertainty, humans engage in an interactive process known as repair: asking questions and seeking clarification until their uncertainty is resolved. We propose a framework for building a visually grounded question-asking model capable of producing polar (yes-no) clarification questions to resolve misunderstandings in dialogue. Our model uses an expected information gain objective to derive informative questions from an off-the-shelf image captioner without requiring any supervised question-answer data. We demonstrate our model's ability to pose questions that improve communicative success in a goal-oriented 20 questions game with synthetic and human answerers.
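To make the expected information gain objective concrete, here is a minimal sketch, not the paper's implementation: it assumes a belief distribution over candidate targets (e.g., images) and a hypothetical answer model giving p(yes | question, target), which in the paper's setting would be derived from the off-the-shelf image captioner. The function scores a single polar question by how much it is expected to reduce uncertainty about the target.

```python
import numpy as np

def expected_information_gain(prior, p_yes_given_target):
    """
    Expected information gain (in bits) of asking one polar (yes/no) question.

    prior             : shape (n_targets,), current belief over candidate targets
    p_yes_given_target: shape (n_targets,), probability the answer would be "yes"
                        if each candidate were the true target (hypothetical answer model)
    """
    prior = np.asarray(prior, dtype=float)
    prior = prior / prior.sum()
    p_yes_given_target = np.asarray(p_yes_given_target, dtype=float)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # EIG(q) = H(target) - E_answer[ H(target | answer) ]
    eig = entropy(prior)

    # Marginal probability of each answer under the current belief.
    p_yes = float(np.dot(prior, p_yes_given_target))
    for answer_prob, likelihood in ((p_yes, p_yes_given_target),
                                    (1.0 - p_yes, 1.0 - p_yes_given_target)):
        if answer_prob <= 0:
            continue
        posterior = prior * likelihood / answer_prob  # Bayesian update on the answer
        eig -= answer_prob * entropy(posterior)
    return eig


# Toy usage: 4 equally likely candidate images; the question is "yes" for exactly two of them.
prior = np.ones(4) / 4
p_yes = np.array([1.0, 1.0, 0.0, 0.0])
print(expected_information_gain(prior, p_yes))  # 1.0 bit: the question halves the candidate set
```

A question generator could score many candidate questions this way and ask the highest-EIG one; in practice the answer model would be noisy, so the gain of a real question falls below this idealized value.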