While Alexa supports over 100,000 skills on paper, its capabilities cover only a fraction of what is possible on the web. To reach the full potential of a virtual assistant, individuals should be able to create skills that automate their personal web browsing routines. Many seemingly simple routines, however, such as monitoring COVID-19 stats for one's hometown, detecting changes in a child's grades online, or sending personally-addressed messages to a group, cannot be automated without conventional programming concepts such as conditional and iterative evaluation. This paper presents VASH (Voice Assistant Scripting Helper), a new system that empowers users to create useful web-based virtual assistant skills without learning a formal programming language. With VASH, the user demonstrates the task of interest in the browser and issues a few voice commands, such as naming the skill and adding conditions on its actions. VASH turns these multi-modal specifications into skills that can be invoked by voice on a virtual assistant. The skills are represented in WebTalk, a formal programming language we designed that supports parameterization, function invocation, conditionals, and iterative execution. VASH is a fully working prototype that runs on the Chrome browser and handles real-world websites. Our user study shows that users have many web routines they wish to automate, 81% of which can be expressed with VASH. We found that VASH is easy to learn, and that a majority of the users in our study want to use our system.
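To make the class of routines concrete, the following is a minimal, hypothetical sketch in Python of the kind of parameterized, conditional, iterative skill the abstract describes. It is not WebTalk (whose syntax is not given here); the function names, town, recipients, and threshold are illustrative assumptions only.

```python
# Hypothetical sketch of a "monitor COVID-19 stats for my hometown" routine.
# This is NOT WebTalk; it only illustrates the concepts the abstract says such
# skills require: parameterization, function invocation, conditionals, and
# iteration. The data, names, and threshold below are illustrative assumptions.

def fetch_new_cases(town: str) -> int:
    """Stand-in for the web-browsing step VASH would record by demonstration.

    A real skill would load the stats page for `town` and extract the value;
    here we return a fixed number so the sketch runs on its own.
    """
    return 42

def send_message(recipient: str, text: str) -> None:
    """Stand-in for the assistant's messaging action."""
    print(f"To {recipient}: {text}")

def monitor_hometown_cases(town: str, recipients: list[str], threshold: int) -> None:
    """Parameterized skill: check the stats and, only if they exceed the
    threshold, send a personally-addressed message to each group member."""
    cases = fetch_new_cases(town)          # function invocation
    if cases > threshold:                  # conditional evaluation
        for person in recipients:         # iterative evaluation
            send_message(person, f"Hi {person}, {town} reported {cases} new cases today.")

monitor_hometown_cases("Springfield", ["Ana", "Ben"], threshold=25)
```

In VASH, the fetch and messaging steps would be captured by demonstration in the browser, while the naming, condition, and iteration over the group would be added through voice commands rather than written by hand.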