
MobileVisFixer: Tailoring Web Visualizations for Mobile Phones Leveraging an Explainable Reinforcement Learning Framework

Added by Aoyu Wu
Publication date: 2020
Language: English





We contribute MobileVisFixer, a new method to make visualizations more mobile-friendly. Although mobile devices have become the primary means of accessing information on the web, many existing visualizations are not optimized for small screens and can lead to a frustrating user experience. Currently, practitioners and researchers have to engage in a tedious and time-consuming process to ensure that their designs scale to screens of different sizes, and existing toolkits and libraries provide little support in diagnosing and repairing issues. To address this challenge, MobileVisFixer automates a mobile-friendly visualization re-design process with a novel reinforcement learning framework. To inform the design of MobileVisFixer, we first collected and analyzed SVG-based visualizations on the web and identified five common mobile-friendliness issues. MobileVisFixer addresses four of these issues on single-view Cartesian visualizations with linear or discrete scales through a Markov Decision Process model that is both generalizable across various visualizations and fully explainable. MobileVisFixer deconstructs charts into declarative formats and uses a greedy heuristic based on Policy Gradient methods to find solutions to this difficult, multi-criteria optimization problem in reasonable time. In addition, MobileVisFixer can easily be extended to incorporate other optimization algorithms for data visualizations. A quantitative evaluation on two real-world datasets demonstrates the effectiveness and generalizability of our method.
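To make the policy-gradient idea concrete, below is a minimal, hypothetical Python sketch of a REINFORCE-style loop that picks discrete chart-repair actions (shrinking fonts, rotating tick labels, and so on) to maximise a mobile-friendliness reward. The action set, two-feature state, and reward function are illustrative assumptions, not MobileVisFixer's actual formulation.

```python
# Illustrative REINFORCE-style loop over hypothetical chart-repair actions.
import numpy as np

ACTIONS = ["shrink_font", "rotate_x_labels", "reduce_tick_count", "stop"]

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def mobile_friendliness(state):
    # Hypothetical reward: penalise horizontal overflow and unreadably small text.
    overflow_px, min_font_px = state
    return -max(overflow_px, 0.0) / 100.0 - (1.0 if min_font_px < 8 else 0.0)

def apply_action(state, action):
    # Hypothetical effect of each repair action on the chart state.
    overflow_px, min_font_px = state
    if action == "shrink_font":
        return (overflow_px - 20.0, min_font_px - 1.0)
    if action == "rotate_x_labels":
        return (overflow_px - 40.0, min_font_px)
    if action == "reduce_tick_count":
        return (overflow_px - 30.0, min_font_px)
    return state  # "stop" leaves the chart unchanged

theta = np.zeros((2, len(ACTIONS)))   # linear policy over the 2 state features
alpha = 0.05                          # learning rate

for episode in range(500):
    state, trajectory = (120.0, 11.0), []   # start: 120 px overflow, 11 px fonts
    for _ in range(6):
        feats = np.array(state) / 100.0     # crude feature scaling
        probs = softmax(feats @ theta)
        a = np.random.choice(len(ACTIONS), p=probs)
        trajectory.append((feats, a, probs))
        state = apply_action(state, ACTIONS[a])
        if ACTIONS[a] == "stop":
            break
    reward = mobile_friendliness(state)     # terminal reward only, for brevity
    for feats, a, probs in trajectory:      # REINFORCE gradient ascent
        grad = -np.outer(feats, probs)      # d log pi(a|s) / d theta
        grad[:, a] += feats
        theta += alpha * reward * grad
```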




Read More

The trend towards mobile device usage has, more than ever, made the Web a ubiquitous platform where users perform all kinds of tasks. In some cases, users access the Web with native mobile applications developed for well-known sites, such as LinkedIn, Facebook, Twitter, etc. These native applications might offer further (e.g. location-based) functionalities to their users in comparison with their corresponding Web sites, because they were developed with mobile features in mind. However, most Web applications do not have such a native mobile counterpart, and users access them using browsers on the mobile device. Users might eventually want to add mobile features to these Web sites even though those features were not supported originally. In this paper we present a novel approach that allows end users to augment their preferred Web sites with mobile features. This end-user approach is supported by a framework for mobile Web augmentation that we describe in the paper. We also present a set of supporting tools and a validation experiment with end users.
144 - Ingmar Steiner 2012
We present a modular framework for articulatory animation synthesis using speech motion capture data obtained with electromagnetic articulography (EMA). Adapting a skeletal animation approach, the articulatory motion data is applied to a three-dimensional (3D) model of the vocal tract, creating a portable resource that can be integrated in an audiovisual (AV) speech synthesis platform to provide realistic animation of the tongue and teeth for a virtual character. The framework also provides an interface to articulatory animation synthesis, as well as an example application to illustrate its use with a 3D game engine. We rely on cross-platform, open-source software and open standards to provide a lightweight, accessible, and portable workflow.
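As a rough illustration of how motion-capture coils can drive a skeletal rig, the Python sketch below turns the vector between two EMA coils into a single bone orientation per frame. The coil names (TT, TB), the coordinate convention, and the synthetic data are assumptions made for illustration, not the framework's actual pipeline.

```python
# Sketch: derive one bone orientation per frame from two EMA coil positions.
import numpy as np

def frame_to_bone_angle(tongue_tip, tongue_back):
    """Yaw/pitch (radians) of a single 'tongue' bone aimed from back to tip."""
    d = np.asarray(tongue_tip, float) - np.asarray(tongue_back, float)
    d /= np.linalg.norm(d)
    pitch = np.arcsin(d[2])            # elevation of the tongue tip
    yaw = np.arctan2(d[1], d[0])       # left/right deviation
    return yaw, pitch

# EMA frames: coil name -> (x, y, z) in millimetres (synthetic example data).
frames = [
    {"TT": (55.0, 0.0, -4.0), "TB": (25.0, 0.0, -10.0)},
    {"TT": (54.0, 1.0, -1.0), "TB": (25.0, 0.0, -10.0)},
]
for f in frames:
    yaw, pitch = frame_to_bone_angle(f["TT"], f["TB"])
    print(f"yaw={np.degrees(yaw):5.1f} deg  pitch={np.degrees(pitch):5.1f} deg")
```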
We present RealitySketch, an augmented reality interface for sketching interactive graphics and visualizations. In recent years, an increasing number of AR sketching tools enable users to draw and embed sketches in the real world. However, with the current tools, sketched contents are inherently static, floating in mid air without responding to the real world. This paper introduces a new way to embed dynamic and responsive graphics in the real world. In RealitySketch, the user draws graphical elements on a mobile AR screen and binds them with physical objects in real-time and improvisational ways, so that the sketched elements dynamically move with the corresponding physical motion. The user can also quickly visualize and analyze real-world phenomena through responsive graph plots or interactive visualizations. This paper contributes a set of interaction techniques that enable capturing, parameterizing, and visualizing real-world motion without pre-defined programs and configurations. Finally, we demonstrate our tool with several application scenarios, including physics education, sports training, and in-situ tangible interfaces.
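The binding idea can be pictured with a tiny, hypothetical Python sketch: a drawn lever is parameterised by the live 2-D position of a tracked object relative to a pivot the user tapped, and the resulting angle series is what a responsive graph plot would chart. The names and the tracking source are invented for illustration; real AR tracking and rendering are not shown.

```python
# Sketch: bind a drawn element's parameter (an angle) to tracked object motion.
import math

def bound_angle(pivot, tracked):
    """Angle (degrees) of the sketched lever, re-evaluated every frame."""
    dx, dy = tracked[0] - pivot[0], tracked[1] - pivot[1]
    return math.degrees(math.atan2(dy, dx))

pivot = (0.0, 0.0)                               # where the user anchored the sketch
samples = [(1.0, 0.0), (0.9, 0.4), (0.5, 0.9)]   # tracked positions, one per frame
series = [bound_angle(pivot, p) for p in samples]
print(series)   # a responsive graph plot would chart these values live
```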
71 - Gregoire Cattan 2020
A brain-computer interface (BCI) based on electroencephalography (EEG) is a promising technology for enhancing virtual reality (VR) applications, in particular for gaming. We focus on the so-called P300-BCI, a stable and accurate BCI paradigm relying on the recognition of a positive event-related potential (ERP) occurring in the EEG about 300 ms post-stimulation. We implemented a basic version of such a BCI displayed on an ordinary and affordable smartphone-based head-mounted VR device: that is, a mobile and passive VR system (with no electronic components beyond the smartphone). The mobile phone performed the stimuli presentation, EEG synchronization (tagging) and feedback display. We compared the ERPs and the accuracy of the BCI on the VR device with a traditional BCI running on a personal computer (PC). We also evaluated the impact of subjective factors on the accuracy. The study was within-subjects, with 21 participants and one session in each modality. No significant difference in BCI accuracy was found between the PC and VR systems, although the P200 ERP was significantly wider and larger in the VR system as compared to the PC system.
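For readers unfamiliar with the ERP analysis underlying a P300 BCI, the following Python sketch cuts fixed-length epochs around each stimulus onset and averages them so that the positive deflection around 300 ms emerges. The sampling rate, epoch window, and synthetic signal are assumptions made for this example and do not reproduce the study's recording setup.

```python
# Sketch: epoch extraction and averaging around stimulus onsets (P300-style ERP).
import numpy as np

FS = 250                                   # sampling rate (Hz), assumed
PRE, POST = int(0.2 * FS), int(0.6 * FS)   # 200 ms before, 600 ms after onset

def average_erp(eeg, onsets):
    """Mean epoch (PRE+POST samples) across all stimulus onsets."""
    epochs = [eeg[o - PRE:o + POST] for o in onsets
              if o - PRE >= 0 and o + POST <= len(eeg)]
    return np.mean(epochs, axis=0)

# Synthetic single-channel EEG: noise plus a P300-like bump after each stimulus.
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 5.0, 60 * FS)
onsets = np.arange(2 * FS, 58 * FS, FS)    # one stimulus per second
bump = 8.0 * np.exp(-0.5 * ((np.arange(POST) / FS - 0.3) / 0.05) ** 2)
for o in onsets:
    eeg[o:o + POST] += bump                # inject the evoked response

erp = average_erp(eeg, onsets)
peak_s = (np.argmax(erp) - PRE) / FS
print(f"ERP peak at {peak_s * 1000:.0f} ms post-stimulus")  # approx. 300 ms
```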
86 - Eytan Adar, Elsie Lee 2020
Significant research has provided robust task and evaluation languages for the analysis of exploratory visualizations. Unfortunately, these taxonomies fail when applied to communicative visualizations. Instead, designers often resort to evaluating communicative visualizations from the cognitive efficiency perspective: can the recipient accurately decode my message/insight? However, designers are unlikely to be satisfied if the message went in one ear and out the other. The consequence of this inconsistency is that it is difficult to design or select between competing options in a principled way. The problem we address is the fundamental mismatch between how designers want to describe their intent, and the language they have. We argue that visualization designers can address this limitation through a learning lens: that the recipient is a student and the designer a teacher. By using learning objectives, designers can better define, assess, and compare communicative visualizations. We illustrate how the learning-based approach provides a framework for understanding a wide array of communicative goals. To understand how the framework can be applied (and its limitations), we surveyed and interviewed members of the Data Visualization Society using their own visualizations as a probe. Through this study we identified the broad range of objectives in communicative visualizations and the prevalence of certain objective types.