Intelligent conversational agents, or chatbots, can take on various identities and are increasingly engaging in more human-centered conversations with persuasive goals. However, little is known about how identities and inquiry strategies influence a conversation's effectiveness. We conducted an online study in which 790 participants were persuaded by a chatbot to donate to charity. We designed a two (chatbot identity) by four (inquiry strategy) factorial experiment in which participants were randomly assigned to conditions. Findings showed that the perceived identity of the chatbot had significant effects on the persuasion outcome (i.e., donation) and on interpersonal perceptions (i.e., competence, confidence, warmth, and sincerity). Further, we identified interaction effects between perceived identities and inquiry strategies. We discuss the theoretical and practical implications of these findings for developing ethical and effective persuasive chatbots. Our published data, code, and analyses serve as a first step toward building competent, ethical persuasive chatbots.
Conversational agents are a recent trend in human-computer interaction, deployed in multidisciplinary applications to assist users. In this paper, we introduce Atreya, an interactive bot for chemistry enthusiasts, researchers, and students to stu
This STEM education study investigates the Streamline to Mastery professional development program, in which teachers work in partnership with university researchers to design professional development opportunities for themselves and for fellow teache
Interpreting how persuasive language influences audiences has implications across many domains, such as advertising, argumentation, and propaganda. Persuasion relies on more than a message's content. Arranging the order of the message itself (i.e., orderi
Outcome-driven studies evaluating the potential effects of games and apps designed to promote healthy eating and exercise remain limited, either targeting design or usability factors while omitting health-based outcomes altogether, or tend
We present a dialogue elicitation study to assess how users envision conversations with a perfect voice assistant (VA). In an online survey, N=205 participants were prompted with everyday scenarios, and wrote the lines of both user and VA in dialogue