
Designing Socially Intelligent Virtual Companions

Added by Han Yu
Publication date: 2014
Language: English





Virtual companions that interact with users in a socially complex environment require a wide range of social skills. Displaying curiosity both improves a companion's believability and unobtrusively influences the user's activities over time. Curiosity represents a drive to know new things, and it is a major force for engaging learners in active learning. Existing research pays little attention to curiosity. In this paper, we enrich the social skills of a virtual companion by infusing curiosity into its mental model. We propose a curious companion that resides in a Virtual Learning Environment (VLE) to stimulate users' curiosity. The curious companion model is developed based on multidisciplinary considerations, and its effectiveness is demonstrated by a preliminary field study.
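The paper does not publish its model's internals, but the idea of a curiosity drive — preferring stimuli the companion has seen least — can be illustrated with a minimal, hypothetical sketch. All names here (`CuriousCompanion`, `novelty`, the decay formula) are assumptions for illustration, not the authors' model.

```python
from collections import Counter

class CuriousCompanion:
    """Toy curiosity model: the companion is drawn to stimuli it has seen least."""

    def __init__(self):
        self.exposure = Counter()  # how many times each stimulus was observed

    def observe(self, stimulus):
        self.exposure[stimulus] += 1

    def novelty(self, stimulus):
        # Assumed decay: novelty falls as 1 / (1 + exposure count).
        return 1.0 / (1.0 + self.exposure[stimulus])

    def most_curious(self, stimuli):
        # The companion directs attention to the most novel stimulus.
        return max(stimuli, key=self.novelty)
```

In a VLE setting, the "stimuli" could be learning activities, so the companion's attention would drift toward material the user has not yet explored — one simple way such a drive could unobtrusively steer a learner.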


This paper describes how home appliances might be enhanced to improve user awareness of energy usage. Households wish to lead comfortable and manageable lives. Balancing this reasonable desire with the environmental and political goal of reducing electricity usage is a challenge that we claim is best met through the design of interfaces that allow users better control of their usage and unobtrusively inform them of the actions of their peers. A set of design principles along these lines is formulated in this paper. We have built a fully functional prototype home appliance with a socially aware interface that signals the aggregate usage of the users' peer group according to these principles, and we present the prototype in the paper.
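The abstract does not say how the prototype maps peer usage to a signal, but one plausible, purely illustrative scheme is a traffic-light comparison against the peer-group average. The thresholds and function name below are invented for the sketch.

```python
def usage_signal(own_kwh, peer_kwh_list):
    """Toy peer-comparison signal: map own usage relative to the peer
    average onto a simple lamp color an appliance could display."""
    avg = sum(peer_kwh_list) / len(peer_kwh_list)
    if own_kwh <= 0.9 * avg:
        return "green"   # clearly below the peer average
    if own_kwh <= 1.1 * avg:
        return "amber"   # roughly at the peer average
    return "red"         # clearly above the peer average
```

The design point the sketch captures is unobtrusiveness: a single ambient color conveys the peer comparison without demanding attention or exposing any individual peer's data.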
Research on socially assistive robots has the potential to augment and assist physical therapy sessions for patients with neurological and musculoskeletal problems (e.g. stroke). During a physical therapy session, generating personalized feedback is critical to improving patients' engagement. However, prior work on socially assistive robotics for physical therapy has mainly utilized pre-defined corrective feedback, even though patients have varying physical and functional abilities. This paper presents an interactive approach in which a socially assistive robot can dynamically select kinematic features for assessing an individual patient's exercises, predict the quality of motion, and provide patient-specific corrective feedback for personalized interaction with a robot exercise coach.
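The paper's actual feature-selection and prediction method is not described in the abstract. As a loose, hypothetical illustration of "patient-specific corrective feedback", the sketch below flags whichever kinematic feature deviates most from that patient's own baseline; the feature names, tolerance, and messages are all invented.

```python
def corrective_feedback(features, patient_baseline, tolerance=0.15):
    """Toy coach: flag the kinematic feature that deviates most from the
    patient's own baseline, relative to that baseline's magnitude."""
    deviations = {
        name: abs(value - patient_baseline[name]) / max(abs(patient_baseline[name]), 1e-9)
        for name, value in features.items()
    }
    worst, dev = max(deviations.items(), key=lambda kv: kv[1])
    if dev <= tolerance:
        return "Good form - keep going!"
    return f"Check your {worst}: it is off by {dev:.0%} from your usual range."
```

Comparing against a per-patient baseline (rather than a fixed norm) is the point of contrast the abstract draws with pre-defined feedback.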
This reflection paper takes the 25th IUI conference milestone as an opportunity to analyse in detail the understanding of intelligence in the community. Despite the focus on intelligent UIs, it has remained elusive what exactly renders an interactive system or user interface intelligent, in the fields of HCI and AI at large as well. We follow a bottom-up approach to analyse the emergent meaning of intelligence in the IUI community: in particular, we apply text analysis to extract all occurrences of "intelligent" in all IUI proceedings. We manually review these with regard to three main questions: 1) What is deemed intelligent? 2) How (else) is it characterised? and 3) What capabilities are attributed to an intelligent entity? We discuss the community's emerging implicit perspective on characteristics of intelligence in intelligent user interfaces and conclude with ideas for stating one's own understanding of intelligence more explicitly.
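The extraction step described here — pulling every occurrence of "intelligent" from a corpus for manual review — can be sketched with a few lines of standard-library Python. The sentence splitter and function name are assumptions; the paper does not specify its tooling.

```python
import re

def intelligent_mentions(text):
    """Extract sentences containing the word 'intelligent' for manual review."""
    # Naive sentence split on end punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    # Case-insensitive whole-word match, so 'Intelligent' hits but 'intelligently' contexts
    # would need a broader pattern.
    return [s for s in sentences if re.search(r"\bintelligent\b", s, re.I)]
```

Running this over each proceedings volume would yield the candidate snippets that the authors then coded manually against their three questions.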
Building a socially intelligent agent involves many challenges, one of which is to track the agent's mental state transitions and teach the agent to make rational decisions guided by its utility, like a human. Towards this end, we propose to incorporate a mental state parser and a utility model into dialogue agents. The hybrid mental state parser extracts information from both the dialogue and event observations and maintains a graphical representation of the agent's mind; meanwhile, the utility model is a ranking model that learns human preferences from a crowd-sourced social commonsense dataset, Social IQA. Empirical results show that the proposed model attains state-of-the-art performance on the dialogue/action/emotion prediction task in the fantasy text-adventure game dataset, LIGHT. We also show example cases to demonstrate: (i) how the proposed mental state parser can assist the agent's decisions by grounding on context such as locations and objects, and (ii) how the utility model can help the agent make reasonable decisions in a dilemma. To the best of our knowledge, ours is the first work to build a socially intelligent agent by incorporating a hybrid mental state parser for both discrete events and continuous dialogues together with human-like utility modeling.
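A utility model used as a ranking model, as described here, boils down to scoring candidate actions and sorting by score. The sketch below is a stand-in only: the weighted-sum "utility" and its feature names are invented, whereas the paper's model is learned from Social IQA preference data.

```python
def rank_actions(candidates, utility):
    """Rank candidate actions by a utility function, best first.
    candidates: {action_name: feature_dict}."""
    return sorted(candidates, key=lambda name: utility(candidates[name]), reverse=True)

# Hypothetical hand-set weights standing in for a learned preference model.
WEIGHTS = {"helps_other": 2.0, "self_risk": -1.5, "social_norm": 1.0}

def toy_utility(action_features):
    # Linear stand-in for a learned ranker over social-commonsense features.
    return sum(WEIGHTS.get(k, 0.0) * v for k, v in action_features.items())
```

The agent would then take the top-ranked action; in a dilemma, the relative weights on features such as risk versus helping others are exactly what the learned model supplies.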
Luis Valente, 2016
This paper proposes the concept of live-action virtual reality games as a new genre of digital games based on an innovative combination of live action, mixed reality, context awareness, and interaction paradigms that comprise tangible objects, context-aware input devices, and embedded/embodied interactions. Live-action virtual reality games are live-action games because a player physically acts out (using his/her real body and senses) his/her avatar (his/her virtual representation) in the game stage, which is the mixed-reality environment where the game happens. The game stage is a kind of augmented virtuality: a mixed reality in which the virtual world is augmented with real-world information. In live-action virtual reality games, players wear HMD devices and see a virtual world that is constructed using the physical world's architecture as the basic geometry and context information. Physical objects that reside in the physical world are also mapped to virtual elements. Live-action virtual reality games keep the virtual and real worlds superimposed, requiring players to physically move in the environment and to use different interaction paradigms (such as tangible and embodied interaction) to complete game activities. This setup enables players to touch physical architectural elements (such as walls) and other objects, feeling the game stage. Players have free movement and may interact with physical objects placed in the game stage, both implicitly and explicitly. Live-action virtual reality games differ from similar game concepts because they sense and use contextual information to create unpredictable game experiences, giving rise to emergent gameplay.