
Multi-Modal End-User Programming of Web-Based Virtual Assistant Skills

Published by Michael Fischer
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





While Alexa can, on paper, perform over 100,000 skills, its capability covers only a fraction of what is possible on the web. To reach the full potential of an assistant, it is desirable that individuals can create skills to automate their personal web browsing routines. Many seemingly simple routines, however, such as monitoring COVID-19 stats for their hometown, detecting changes in their child's grades online, or sending personally-addressed messages to a group, cannot be automated without conventional programming concepts such as conditional and iterative evaluation. This paper presents VASH (Voice Assistant Scripting Helper), a new system that empowers users to create useful web-based virtual assistant skills without learning a formal programming language. With VASH, the user demonstrates their task of interest in the browser and issues a few voice commands, such as naming the skill and adding conditions on the actions. VASH turns these multi-modal specifications into skills that can be invoked by voice on a virtual assistant. These skills are represented in a formal programming language we designed, called WebTalk, which supports parameterization, function invocation, conditionals, and iterative execution. VASH is a fully working prototype that runs in the Chrome browser on real-world websites. Our user study shows that users have many web routines they wish to automate, 81% of which can be expressed using VASH. We found that VASH is easy to learn, and that a majority of the users in our study want to use our system.
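The abstract does not show WebTalk's concrete syntax, so the following is a minimal Python sketch of the kind of parameterized, conditional, and iterative skill it describes; fetch_stat and send_message are hypothetical stand-ins for recorded browser actions, not functions from the paper.

# Hypothetical sketch of a VASH-style skill; this is not WebTalk's actual syntax.
def fetch_stat(town):
    # Stand-in for a demonstrated browsing step that scrapes a number.
    return {"Springfield": 42}.get(town, 0)

def send_message(name, text):
    # Stand-in for a demonstrated messaging step.
    print(f"to {name}: {text}")

def monitor_stats(town, threshold):          # parameterization
    stat = fetch_stat(town)                  # function invocation
    if stat > threshold:                     # conditional added by voice
        send_message("me", f"{town} reports {stat} cases")

def message_group(names, template):
    for name in names:                       # iterative execution
        send_message(name, template.format(name=name))

monitor_stats("Springfield", 10)
message_group(["Ada", "Alan"], "Hi {name}, the meeting moved to noon.")

Invoked by voice, the assistant would run monitor_stats with the parameters bound at skill-creation time; the conditional and the loop correspond to the voice-added conditions and iteration the abstract mentions.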




Read also

The trend towards mobile device usage has, more than ever, made the Web a ubiquitous platform where users perform all kinds of tasks. In some cases, users access the Web with native mobile applications developed for well-known sites such as LinkedIn, Facebook, and Twitter. These native applications might offer additional (e.g. location-based) functionality compared with the corresponding Web sites, because they were developed with mobile features in mind. However, most Web applications have no such native mobile counterpart, and users access them through a browser on the mobile device. Users might eventually want to add mobile features to these Web sites even though those features were not supported originally. In this paper we present a novel approach that allows end users to augment their preferred Web sites with mobile features. This end-user approach is supported by a framework for mobile Web augmentation that we describe in the paper. We also present a set of supporting tools and a validation experiment with end users.
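The abstract does not name the framework's concepts, so the sketch below is a hypothetical Python illustration of the core idea: an end user registers a site-specific augmenter that adds a location-based feature to pages of a chosen Web site. All names here are invented for illustration.

# Hypothetical illustration of end-user Web augmentation; names are invented.
AUGMENTERS = []

def augmenter(url_prefix):
    # Register a user-defined mobile-feature augmenter for matching pages.
    def register(fn):
        AUGMENTERS.append((url_prefix, fn))
        return fn
    return register

@augmenter("https://example.com/stores")
def add_nearest_store_widget(page, location):
    # The added mobile feature: a location-aware shortcut on the page.
    page["widgets"].append(f"Nearest store from {location}")

def render(url, location):
    page = {"url": url, "widgets": []}
    for prefix, fn in AUGMENTERS:
        if url.startswith(prefix):
            fn(page, location)
    return page

print(render("https://example.com/stores", "(48.85, 2.35)"))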
Virtual Reality (VR) enables users to collaborate while exploring scenarios not realizable in the physical world. We propose CollabVR, a distributed multi-user collaboration environment, to explore how digital content improves expression and understanding of ideas among groups. To achieve this, we designed and examined three possible configurations for participants and shared manipulable objects. In configuration (1), participants stand side-by-side. In (2), participants are positioned across from each other, mirrored face-to-face. In (3), called eyes-free, participants stand side-by-side looking at a shared display and draw upon a horizontal surface. We also explored a telepathy mode, in which participants could see from each other's point of view. We implemented 3DSketch visual objects for participants to manipulate and move between virtual content boards in the environment. To evaluate the system, we conducted a study in which four people at a time used each of the three configurations to cooperate and communicate ideas with each other. We provide experimental results and interview responses.
In this article we describe Hack.VR, an object-oriented programming game in virtual reality. Hack.VR uses a VR programming language in which nodes represent functions and node connections represent data flow. Using this programming framework, players reprogram VR objects such as elevators, robots, and switches. Hack.VR has been designed to be highly interactable both physically and semantically.
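The abstract describes the language only as nodes-for-functions with connections carrying data flow; this minimal Python sketch illustrates that dataflow model. Node, connect, and the elevator example are all hypothetical, not taken from Hack.VR.

# Minimal dataflow sketch of the nodes-and-connections idea described for Hack.VR.
class Node:
    def __init__(self, fn):
        self.fn = fn
        self.inputs = []                 # upstream nodes (data-flow edges)

    def connect(self, upstream):
        self.inputs.append(upstream)

    def evaluate(self):
        # Pull values along incoming connections, then apply this node's function.
        return self.fn(*(n.evaluate() for n in self.inputs))

# "Reprogramming" an elevator: floor sensor -> doubler -> move command.
sensor = Node(lambda: 3)
double = Node(lambda x: x * 2)
double.connect(sensor)
move = Node(lambda x: f"move elevator to floor {x}")
move.connect(double)
print(move.evaluate())                   # move elevator to floor 6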
Learning from Demonstration (LfD) provides an intuitive and fast approach to program robotic manipulators. Task-parameterized representations allow easy adaptation to new scenes and online observations. However, this approach has been limited to pose-only demonstrations and thus only skills with spatial and temporal features. In this work, we extend the LfD framework to address forceful manipulation skills, which are of great importance for industrial processes such as assembly. For such skills, multi-modal demonstrations including robot end-effector poses, force and torque readings, and the operation scene are essential. Our objective is to reproduce such skills reliably according to the demonstrated pose and force profiles within different scenes. The proposed method combines our previous work on task-parameterized optimization and attractor-based impedance control. The learned skill model consists of (i) the attractor model that unifies the pose and force features, and (ii) the stiffness model that optimizes the stiffness for different stages of the skill. Furthermore, an online execution algorithm is proposed to adapt the skill execution to real-time observations of robot poses, measured forces, and changed scenes. We validate this method rigorously on a 7-DoF robot arm over several steps of an E-bike motor assembly process, which require different types of forceful interaction such as insertion, sliding, and twisting.
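The attractor-plus-stiffness decomposition can be pictured with the standard impedance control law; the Python sketch below is a generic illustration with made-up gains and poses, not the paper's implementation.

import numpy as np

def impedance_force(x, x_dot, x_att, K, D):
    # Standard impedance law: a spring toward the learned attractor plus damping.
    # K would come from the learned, stage-dependent stiffness model.
    return K @ (x_att - x) - D @ x_dot

K = np.diag([50.0, 50.0, 400.0])   # compliant in x/y, stiff along insertion axis z
D = 2.0 * np.sqrt(K)               # critical-damping heuristic (illustrative)
x = np.array([0.00, 0.01, 0.10])   # current end-effector position
x_dot = np.zeros(3)                # current end-effector velocity
x_att = np.zeros(3)                # attractor from the learned pose/force model
print(impedance_force(x, x_dot, x_att, K, D))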
Humanness is core to speech interface design. Yet little is known about how users conceptualise perceptions of humanness and how people define their interaction with speech interfaces through this. To map these perceptions, n=21 participants held dialogues with a human and with two speech-interface-based intelligent personal assistants, and then reflected on and compared their experiences using the repertory grid technique. Analysis of the constructs shows that perceptions of humanness are multidimensional, focusing on eight key themes: partner knowledge set, interpersonal connection, linguistic content, partner performance and capabilities, conversational interaction, partner identity and role, vocal qualities, and behavioral affordances. Through these themes, it is clear that users define the capabilities of speech interfaces differently from humans, seeing them as more formal, fact-based, impersonal, and less authentic. Based on the findings, we discuss how the themes help to scaffold, categorise, and target research and design efforts, considering the appropriateness of emulating humanness.