
Narrative Transitions in Data Videos

Added by: Junxiu Tang
Publication date: 2020
Language: English





Transitions are widely used in data videos to seamlessly connect data-driven charts with one another, or to connect visualizations with non-data-driven motion graphics. To inform transition design in data videos, we conduct a content analysis of more than 3,500 clips extracted from 284 data videos. We annotate the visualization types and transition designs in these segments and examine how the transitions help make connections between contexts. We propose a taxonomy of transitions in data videos, in which two categories of transitions are defined by how they use visual variables to build fluent narratives.
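
To make the annotation scheme concrete, here is a minimal sketch of how one annotated clip might be represented; the field names and category labels are hypothetical illustrations, not the paper's exact coding scheme.

    from dataclasses import dataclass

    @dataclass
    class TransitionAnnotation:
        clip_id: str
        source_visual: str       # e.g., "bar chart", or a non-data motion graphic
        target_visual: str
        visual_variables: list   # e.g., ["position", "color"]
        category: str            # one of the taxonomy's two transition categories

    clip = TransitionAnnotation(
        clip_id="video042_clip07",
        source_visual="bar chart",
        target_visual="line chart",
        visual_variables=["position", "color"],
        category="chart-to-chart",  # hypothetical label
    )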



Related research

Qianwen Wang, Zhen Li, Siwei Fu (2019)
Visual designs in modern data visualization systems can be complex, which poses special challenges for explaining them to non-experts; however, few if any presentation tools are tailored for this purpose. In this study, we present Narvis, a slideshow authoring tool designed for introducing data visualizations to non-experts. Narvis targets two types of end-users: teachers, experts in data visualization who produce tutorials explaining a visualization, and students, non-experts who try to understand visualization designs through those tutorials. We present an analysis of requirements gathered through close discussions with both types of end-users, and the resulting considerations guide the design and implementation of Narvis. Additionally, to help teachers better organize their introduction slideshows, we specify a data visualization as a hierarchical combination of components, which Narvis automatically detects and extracts. Teachers craft an introduction slideshow by first organizing these components and then explaining them sequentially. A set of templates for adding annotations and animations improves efficiency during authoring. We evaluate Narvis through a qualitative analysis of the authoring experience and a preliminary evaluation of the generated slideshows.
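
As a rough illustration of the hierarchical decomposition described above, the sketch below models a visualization as a tree of components and walks it depth-first to produce a slide order; the component names are illustrative stand-ins, not Narvis's internal API.

    from dataclasses import dataclass, field

    @dataclass
    class Component:
        name: str
        children: list = field(default_factory=list)

    # A scatterplot decomposed into nested components (illustrative only).
    scatterplot = Component("scatterplot", [
        Component("axes", [Component("x axis"), Component("y axis")]),
        Component("marks", [Component("points")]),
        Component("legend"),
    ])

    def authoring_order(component):
        # Depth-first walk: explain a parent component before its children.
        yield component.name
        for child in component.children:
            yield from authoring_order(child)

    print(list(authoring_order(scatterplot)))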
Situation awareness (SA) is critical to improving takeover performance during the transition from automated to manual driving. Although many studies have measured SA during or after the driving task, few have attempted to predict SA in real time in automated driving. In this work, we propose to predict SA during the takeover transition period in conditionally automated driving using eye-tracking and self-reported data. First, a tree-ensemble machine learning model, LightGBM (Light Gradient Boosting Machine), was used to predict SA. Second, to understand which factors influenced SA and how, SHAP (SHapley Additive exPlanations) values of the individual predictor variables in the LightGBM model were calculated. These SHAP values explained the prediction model by identifying the most important factors and their effects on SA, and further improved the model's performance through feature selection. We standardized SA between 0 and 1 by aggregating three performance measures of SA (i.e., placement, distance, and speed estimation of vehicles with regard to the ego-vehicle) in recreated simulated driving scenarios, after 33 participants viewed 32 videos of six lengths between 1 and 20 s. Using only eye-tracking data, our proposed model outperformed the other machine learning models we tested, achieving a root-mean-squared error (RMSE) of 0.121, a mean absolute error (MAE) of 0.096, and a correlation coefficient of 0.719 between the predicted and ground-truth SA. The code is available at https://github.com/refengchou/Situation-awareness-prediction. Our model has important implications for how to monitor and predict SA in real time in automated driving using eye-tracking data.
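
The pipeline described above, a LightGBM regressor on eye-tracking features explained post hoc with SHAP, can be sketched as follows. The authors' actual code is in the linked repository; the synthetic features below are placeholders, not the study's real predictors.

    import numpy as np
    import lightgbm as lgb
    import shap
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error, mean_squared_error

    # Synthetic stand-ins for per-trial eye-tracking features
    # (e.g., fixation duration, pupil size); not the study's data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))
    y = 1.0 / (1.0 + np.exp(-(X[:, 0] - 0.5 * X[:, 2])))  # SA score in [0, 1]

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05)
    model.fit(X_train, y_train)

    pred = model.predict(X_test)
    rmse = mean_squared_error(y_test, pred) ** 0.5
    mae = mean_absolute_error(y_test, pred)
    r = np.corrcoef(pred, y_test)[0, 1]
    print(f"RMSE={rmse:.3f}  MAE={mae:.3f}  r={r:.3f}")

    # SHAP values attribute each prediction to individual features,
    # which is what enables the feature-selection step in the abstract.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)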
The popularity of racket sports (e.g., tennis and table tennis) creates high demand for analysis of player performance, such as notational analysis. While sports videos offer many benefits for such analysis, retrieving accurate information from them can be challenging. In this paper, we propose EventAnchor, a data analysis framework that facilitates interactive annotation of racket sports video with the support of computer vision algorithms. Our approach uses machine learning models from computer vision to help users acquire essential events from videos (e.g., a serve, the ball bouncing on the court) and offers a set of interactive tools for data annotation. An evaluation study of a table tennis annotation system built on this framework shows significant improvements in user performance, both on simple annotation tasks involving objects of interest and on complex annotation tasks requiring domain knowledge.
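
A minimal sketch of the event-anchoring idea, under the assumption that a vision model already emits a per-frame confidence score for an event such as a ball bounce; the detector output here is a random stand-in and the function name is hypothetical.

    import numpy as np

    def candidate_events(scores, threshold=0.8, min_gap=10):
        # Keep frames whose score clears the threshold, enforcing a
        # minimum gap so one physical event yields one anchor.
        anchors = []
        for i, s in enumerate(scores):
            if s >= threshold and (not anchors or i - anchors[-1] >= min_gap):
                anchors.append(i)
        return anchors

    frame_scores = np.random.default_rng(1).random(300)  # stand-in detector output
    anchors = candidate_events(frame_scores)
    # An interactive tool would then present each anchored frame for the
    # annotator to confirm, correct, or enrich with domain labels.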
The study of electronic transitions within a molecule connected to the absorption or emission of light is a common task in the process of the design of new materials. The transitions are complex quantum mechanical processes and a detailed analysis requires a breakdown of these processes into components that can be interpreted via characteristic chemical properties. We approach these tasks by providing a detailed analysis of the electron density field. This entails methods to quantify and visualize electron localization and transfer from molecular subgroups combining spatial and abstract representations. The core of our method uses geometric segmentation of the electronic density field coupled with a graph-theoretic formulation of charge transfer between molecular subgroups. The design of the methods has been guided by the goal of providing a generic and objective analysis following fundamental concepts. We illustrate the proposed approach using several case studies involving the study of electronic transitions in different molecular systems.
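
As a toy illustration of the graph-theoretic formulation, suppose the density field has already been segmented and each subgroup's share of the hole and particle densities is known; the outer product below is a deliberately simplified transfer estimate for illustration, not the paper's exact quantity.

    import numpy as np

    subgroups = ["donor", "bridge", "acceptor"]  # hypothetical segmentation
    hole = np.array([0.7, 0.2, 0.1])      # share of hole density per subgroup
    particle = np.array([0.1, 0.2, 0.7])  # share of particle density per subgroup

    # Entry (i, j) estimates charge leaving subgroup i and arriving at j,
    # i.e., the weight of the i -> j edge in a charge-transfer graph.
    transfer = np.outer(hole, particle)
    for i, src in enumerate(subgroups):
        for j, dst in enumerate(subgroups):
            print(f"{src} -> {dst}: {transfer[i, j]:.2f}")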
We investigated the feasibility of crowdsourcing full-fledged tutorial videos from ordinary people on the Web on how to solve math problems related to logarithms. This kind of approach (a form of learnersourcing) to efficiently collecting tutorial videos and other learning resources could be useful for realizing personalized learning at scale, whereby students receive specific learning resources, drawn from a large and diverse set, that are tailored to their individual and time-varying needs. Results of our study, in which we collected 399 videos from 66 unique teachers on Mechanical Turk, suggest that (1) approximately 100 videos, over 80% of which are mathematically fully correct, can be crowdsourced per week for $5 per video; (2) the crowdsourced videos exhibit significant diversity in terms of language style, presentation media, and pedagogical approach; (3) the average learning gain (posttest minus pretest score) associated with watching the videos was statistically significantly higher than for a control video (0.105 versus 0.045); and (4) the average learning gain (0.1416) from watching the best tested crowdsourced videos was comparable to the learning gain (0.1506) from watching a popular Khan Academy video on logarithms.
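
The learning-gain metric used above is simply posttest score minus pretest score, averaged over learners; a tiny sketch with made-up scores:

    # Made-up scores for illustration; gains in the study were on a 0-1 scale.
    pretest  = [0.40, 0.55, 0.30, 0.62]
    posttest = [0.52, 0.66, 0.41, 0.70]
    gains = [post - pre for pre, post in zip(pretest, posttest)]
    print(sum(gains) / len(gains))  # average learning gain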