Web pages contain a large variety of information, but are largely designed for use by graphical web browsers. Mobile access to web-based information often requires presenting HTML web pages through channels with limited graphical capabilities, such as small screens or audio-only interfaces. Content transcoding and annotations have been explored as methods for intelligently presenting HTML documents. Much of this work has focused on transcoding for small-screen devices such as those found on PDAs and cell phones. Here, we focus on the use of annotations and transcoding for presenting HTML content through a voice user interface instantiated in VoiceXML. This transcoded voice interface is designed with the assumption that it will not be used for extended web browsing by voice, but rather for quick, directed access to information on web pages. We have found repeated structures that are common in the presentation of data on web pages and that are well suited to voice presentation and navigation. In this paper, we describe these structures and their use in an annotation system we have implemented that produces a VoiceXML interface to information originally embedded in HTML documents. We describe the transcoding process used to translate HTML into VoiceXML, including transcoding features we have designed to produce highly usable VoiceXML code.
A critical goal is that organizations and citizens can easily access the geographic information required for good governance. However, despite the costly efforts of governments to create and implement Spatial Data Infrastructures (SDIs), this goal is far from being achieved. This is partly due to the poor usability of the geoportals through which geographic information is accessed. In this position paper, we present IDEAIS, a research network composed of multiple Ibero-American partners that addresses this usability issue through the use of Intelligent Systems, in particular Smart Voice Assistants, to efficiently retrieve and access geographic information.
Augmented reality (AR) has emerged as a prominent technology in mobile app design in recent years. However, usability challenges in these apps are prominent, and there are currently no established guidelines for designing and evaluating interactions in AR as there are for traditional user interfaces. In this work, we aimed to examine the usability of current mobile AR applications and to interpret classic usability heuristics in the context of mobile AR. In particular, we focused on AR home design apps because of their popularity and their ability to incorporate important mobile AR interaction schemas. Our findings indicate that it is important for designers to consider the unfamiliarity of AR technology to the majority of users and to take technological limitations into account when designing mobile AR apps. Our work serves as a first step toward establishing more general heuristics and guidelines for mobile AR.
Human ratings have become a crucial resource for training and evaluating machine learning systems. However, traditional elicitation methods for absolute and comparative rating suffer from issues with consistency and often do not distinguish between uncertainty due to disagreement between annotators and ambiguity inherent to the item being rated. In this work, we present Goldilocks, a novel crowd rating elicitation technique for collecting calibrated scalar annotations that also distinguishes inherent ambiguity from inter-annotator disagreement. We introduce two main ideas: grounding absolute rating scales with examples and using a two-step bounding process to establish a range for an item's placement. We test our designs in three domains: judging toxicity of online comments, estimating satiety of food depicted in images, and estimating age based on portraits. We show that (1) Goldilocks can improve consistency in domains where interpretation of the scale is not universal, and that (2) representing items with ranges lets us simultaneously capture different sources of uncertainty, leading to better estimates of pairwise relationship distributions.
Software analytics in augmented reality (AR) is said to have great potential. One reason this potential is not yet fully exploited may be usability problems in AR user interfaces. We present an iterative, qualitative usability evaluation, with 15 subjects, of a state-of-the-art application for software analytics in AR. We were able to identify and resolve numerous usability issues. Most of them were caused by the use of conventional user interface elements, such as dialog windows, buttons, and scrollbars. The city visualization itself, however, did not cause any usability issues. We therefore argue that future work should focus on making conventional user interface elements in AR obsolete by integrating their functionality into the immersive visualization.
Voice assistants have recently achieved remarkable commercial success. However, the current generation of these devices is typically capable of only reactive interactions. In other words, interactions have to be initiated by the user, which somewhat limits their usability and user experience. We propose that the next generation of such devices should be able to proactively provide the right information in the right way at the right time, without being prompted by the user. However, achieving this is not straightforward, since there is a danger of interrupting the user too much, resulting in the device being distracting or even annoying. Furthermore, it could unwittingly reveal sensitive or private information to third parties. In this report, we discuss the challenges of developing proactively initiated interactions and suggest a framework for determining when it is appropriate for the device to intervene. To validate our design assumptions, we describe, first, how we built a functioning prototype and, second, a user study conducted to assess users' reactions and reflections when in the presence of a proactive voice assistant. This pre-print summarises the state, ideas, and progress towards a proactive device as of autumn 2018.