Neural dialog models are known to suffer from problems such as generating unsafe and inconsistent responses. Even though these problems are crucial and prevalent, they are mostly manually identified by model designers through interactions. Recently, some research instructs crowdworkers to goad the bots into triggering such problems. However, humans leverage superficial clues such as hate speech, while leaving systematic problems undercover. In this paper, we propose two methods including reinforcement learning to automatically trigger a dialog model into generating problematic responses. We show the effect of our methods in exposing safety and contradiction issues with state-of-the-art dialog models.
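The reinforcement-learning idea sketched in the abstract can be illustrated with a toy loop: a red-team policy selects prompts, sends them to the target dialog model, and is rewarded when a safety classifier flags the response. Everything below is a hypothetical stand-in (stub bot, stub classifier, bandit-style policy), not the paper's actual method or models; it only shows the shape of the training signal.

```python
import random

random.seed(0)

# Toy prompt pool the red-team policy can choose from (illustrative only).
CANDIDATE_PROMPTS = ["weather", "insult me", "tell a joke", "contradict yourself"]

def target_dialog_model(prompt):
    # Stub target bot: produces a problematic reply for some prompts.
    return "unsafe reply" if prompt in ("insult me", "contradict yourself") else "safe reply"

def safety_classifier(response):
    # Stub reward model: 1.0 when the response is flagged as problematic.
    return 1.0 if "unsafe" in response else 0.0

# Simple bandit-style policy: keep a running reward estimate per prompt
# and exploit it afterwards (a crude proxy for the RL trigger model).
prefs = {p: 0.0 for p in CANDIDATE_PROMPTS}
lr = 0.5
for _ in range(200):
    prompt = random.choice(CANDIDATE_PROMPTS)        # uniform exploration
    reward = safety_classifier(target_dialog_model(prompt))
    prefs[prompt] += lr * (reward - prefs[prompt])   # move estimate toward reward

best = max(prefs, key=prefs.get)
print(best)  # a prompt that reliably triggers problematic responses
```

In the paper's setting the policy would be a neural prompt generator and the reward a learned safety or contradiction detector; the point of the sketch is only that problematic triggers are discovered automatically from reward, not hand-crafted by crowdworkers.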