
Design Patterns and Trade-Offs in Responsive Visualization for Communication

Published by: Hyeok Kim
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Increased access to mobile devices motivates the need to design communicative visualizations that are responsive to varying screen sizes. However, relatively little design guidance or tooling is currently available to authors. We contribute a detailed characterization of responsive visualization strategies in communication-oriented visualizations, identifying 76 total strategies by analyzing 378 pairs of large screen (LS) and small screen (SS) visualizations from online articles and reports. Our analysis distinguishes between the Targets of responsive visualization, referring to what elements of a design are changed, and Actions, representing how targets are changed. We identify key trade-offs related to authors' need to maintain graphical density, referring to the amount of information per pixel, while also maintaining the message or intended takeaways for users of a visualization. We discuss implications of our findings for future visualization tool design to support responsive transformation of visualization designs, including requirements for automated recommenders for communication-oriented responsive visualizations.
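As a rough illustration of the Target/Action framing and the graphical density notion described in the abstract, the following sketch models a responsive transformation as a set of (Target, Action) pairs and approximates density as marks per pixel. All names and numbers are illustrative assumptions, not values or code from the paper.

```python
# A minimal sketch of (Target, Action) strategies and a crude graphical
# density proxy. The strategy vocabulary and figures below are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class Strategy:
    target: str   # what element is changed, e.g. "marks", "axis labels", "annotations"
    action: str   # how it is changed, e.g. "remove", "aggregate", "transpose"


def graphical_density(num_marks: int, width_px: int, height_px: int) -> float:
    """Rough proxy for information per pixel: encoded marks per screen pixel."""
    return num_marks / (width_px * height_px)


# Large-screen (LS) source vs. a small-screen (SS) target that aggregates marks.
ls_density = graphical_density(num_marks=500, width_px=960, height_px=540)
ss_density = graphical_density(num_marks=50, width_px=360, height_px=640)

applied = [Strategy("marks", "aggregate"), Strategy("annotations", "remove")]
print(f"LS density: {ls_density:.2e} marks/px, SS density: {ss_density:.2e} marks/px")
print("applied strategies:", applied)
```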


Read also

Space-time cube representation is an information visualization technique where spatiotemporal data points are mapped into a cube. Fast and correct analysis of such information is important in, for instance, geospatial and social visualization applications. Information visualization researchers have previously argued that space-time cube representation is beneficial in revealing complex spatiotemporal patterns in a dataset to users. The argument is based on the fact that both time and spatial information are displayed simultaneously to users, an effect difficult to achieve in other representations. However, to our knowledge the actual usefulness of space-time cube representation in conveying complex spatiotemporal patterns to users has not been empirically validated. To fill this gap we report on a between-subjects experiment comparing novice users' error rates and response times when answering a set of questions using either space-time cube or a baseline 2D representation. For some simple questions the error rates were lower when using the baseline representation. For complex questions where the participants needed an overall understanding of the spatiotemporal structure of the dataset, the space-time cube representation resulted in, on average, twice as fast response times with no difference in error rates compared to the baseline. These results provide an empirical foundation for the hypothesis that space-time cube representation benefits users when analyzing complex spatiotemporal patterns.
Unlike traditional file transfer, where only total delay matters, streaming applications impose delay constraints on each packet and require them to be in order. To achieve fast in-order packet decoding, we have to compromise on the throughput. We study this trade-off between throughput and smoothness in packet decoding. We first consider point-to-point streaming and analyze how the trade-off is affected by the frequency of block-wise feedback, whereby the source receives full channel state feedback at periodic intervals. We show that frequent feedback can drastically improve the throughput-smoothness trade-off. Then we consider the problem of multicasting a packet stream to two users. For both point-to-point and multicast streaming, we propose a spectrum of coding schemes that span different throughput-smoothness trade-offs. One can choose an appropriate coding scheme from these, depending upon the delay sensitivity and bandwidth limitations of the application. This work introduces a novel style of analysis using renewal processes and Markov chains to analyze coding schemes.
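To make the feedback-frequency effect concrete, here is a hedged Monte Carlo sketch, not the paper's coding schemes or analysis, that streams packets over an i.i.d. erasure channel and reports an in-order throughput and a crude smoothness measure for several block-wise feedback periods. All parameters and the retransmission policy are illustrative assumptions.

```python
# Toy simulation: the sender learns which packets were erased only at periodic
# feedback instants, then prioritizes retransmitting the oldest missing packet.
import random


def simulate(num_slots=100_000, erasure_p=0.2, feedback_period=1, seed=0):
    rng = random.Random(seed)
    received = set()
    next_new = 0           # next never-sent packet index
    inorder_front = 0      # smallest index not yet delivered in order
    known_missing = []     # erasures already reported to the sender
    pending_erasures = []  # erasures not yet reported
    advances = 0           # slots in which the in-order front moved

    for slot in range(1, num_slots + 1):
        # Send the oldest known-missing packet if any, else a new packet.
        pkt = known_missing.pop(0) if known_missing else next_new
        if pkt == next_new:
            next_new += 1
        if rng.random() < erasure_p:
            pending_erasures.append(pkt)
        else:
            received.add(pkt)
        # Advance the in-order delivery front as far as possible.
        moved = False
        while inorder_front in received:
            inorder_front += 1
            moved = True
        if moved:
            advances += 1
        # Block-wise feedback: erasures become known only periodically.
        if slot % feedback_period == 0:
            known_missing.extend(pending_erasures)
            pending_erasures.clear()

    throughput = inorder_front / num_slots   # in-order packets per slot
    smoothness = advances / num_slots        # fraction of slots with progress
    return throughput, smoothness


for period in (1, 10, 100):
    t, s = simulate(feedback_period=period)
    print(f"feedback every {period:>3} slots: throughput={t:.3f}, smoothness={s:.3f}")
```

Under these assumptions, less frequent feedback leaves gaps at the in-order front open for longer, so delivery becomes burstier even when the raw packet delivery rate barely changes.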
Authors often transform a large screen visualization for smaller displays through rescaling, aggregation, and other techniques when creating visualizations for both desktop and mobile devices (i.e., responsive visualization). However, transformations can alter relationships or patterns implied by the large screen view, requiring authors to reason carefully about what information to preserve while adjusting their design for the smaller display. We propose an automated approach to approximating the loss of support for task-oriented visualization insights (identification, comparison, and trend) in responsive transformation of a source visualization. We operationalize identification, comparison, and trend loss as objective functions calculated by comparing properties of the rendered source visualization to each realized target (small screen) visualization. To evaluate the utility of our approach, we train machine learning models on human-ranked small screen alternative visualizations across a set of source visualizations. We find that our approach achieves an accuracy of 84% (random forest model) in ranking visualizations. We demonstrate this approach in a prototype responsive visualization recommender that enumerates responsive transformations using Answer Set Programming and evaluates the preservation of task-oriented insights using our loss measures. We discuss implications of our approach for the development of automated and semi-automated responsive visualization recommendation.
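The sketch below illustrates the general idea of scoring small-screen candidates against a source view. The trend and identification measures here are simplified stand-ins for the paper's loss functions, the helper names are hypothetical, and all data are synthetic.

```python
# Rank small-screen candidates by a combined loss relative to the source view.
import statistics


def trend_slope(points):
    """Least-squares slope of y over x as a crude 'trend' summary."""
    xs, ys = zip(*points)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    return sum((x - mx) * (y - my) for x, y in points) / sum((x - mx) ** 2 for x in xs)


def trend_loss(source_points, target_points):
    return abs(trend_slope(source_points) - trend_slope(target_points))


def identification_loss(source_points, target_points):
    """Penalize dropped marks: fraction of source points no longer identifiable."""
    return 1 - len(target_points) / len(source_points)


def rank_candidates(source_points, candidates, w_trend=1.0, w_id=1.0):
    scored = [
        (w_trend * trend_loss(source_points, pts)
         + w_id * identification_loss(source_points, pts), name)
        for name, pts in candidates.items()
    ]
    return sorted(scored)  # lowest combined loss first


source = [(x, 2 * x + (x % 3)) for x in range(20)]
candidates = {
    "every-2nd-point": source[::2],
    "every-4th-point": source[::4],
    "first-half-only": source[:10],
}
for loss, name in rank_candidates(source, candidates):
    print(f"{name}: combined loss {loss:.3f}")
```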
Bowen Yu, Ye Yuan, Loren Terveen (2019)
Artificial intelligence algorithms have been used to enhance a wide variety of products and services, including assisting human decision making in high-stakes contexts. However, these algorithms are complex and have trade-offs, notably between prediction accuracy and fairness to population subgroups. This makes it hard for designers to understand algorithms and design products or services in a way that respects users' goals, values, and needs. We proposed a method to help designers and users explore algorithms, visualize their trade-offs, and select algorithms with trade-offs consistent with their goals and needs. We evaluated our method on the problem of predicting criminal defendants' likelihood to re-offend through (i) a large-scale Amazon Mechanical Turk experiment, and (ii) in-depth interviews with domain experts. Our evaluations show that our method can help designers and users of these systems better understand and navigate algorithmic trade-offs. This paper contributes a new way of providing designers the ability to understand and control the outcomes of algorithmic systems they are creating.
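As an illustration of surfacing such trade-offs, the following sketch (synthetic data, not the authors' method, dataset, or fairness metric) evaluates a toy risk score at several decision thresholds and reports overall accuracy next to the gap in false-positive rates between two subgroups.

```python
# Each threshold yields one point on an accuracy-vs-fairness trade-off surface.
import random

rng = random.Random(42)
data = []  # (risk_score, true_label, subgroup), all synthetic
for _ in range(5000):
    group = rng.choice("AB")
    base_rate = 0.35 if group == "A" else 0.45          # toy subgroup base rates
    label = rng.random() < base_rate                     # True = positive outcome
    score = min(1.0, max(0.0, (0.7 if label else 0.3) + rng.gauss(0, 0.2)))
    data.append((score, label, group))


def evaluate(threshold):
    correct = 0
    false_pos = {"A": 0, "B": 0}
    negatives = {"A": 0, "B": 0}
    for score, label, group in data:
        pred = score >= threshold
        correct += pred == label
        if not label:
            negatives[group] += 1
            false_pos[group] += pred
    accuracy = correct / len(data)
    fpr_gap = abs(false_pos["A"] / negatives["A"] - false_pos["B"] / negatives["B"])
    return accuracy, fpr_gap


for threshold in (0.3, 0.5, 0.7):
    acc, gap = evaluate(threshold)
    print(f"threshold {threshold:.1f}: accuracy={acc:.3f}, FPR gap={gap:.3f}")
```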
The Large Intelligent Surface (LIS) is a promising technology in the areas of wireless communication, remote sensing, and positioning. It consists of a continuous radiating surface located in the proximity of the users, with the capability to communicate by transmission and reception (replacing base stations). Despite its potential, there are numerous challenges from an implementation point of view, the most relevant being the interconnection data rate, computational complexity, and storage. In order to address those challenges, hierarchical architectures with distributed processing techniques are envisioned to be relevant for this task, while ensuring scalability. In this work we perform algorithm-architecture co-design to propose two distributed interference cancellation algorithms and a tree-based interconnection topology for uplink processing. We also analyze the performance, hardware requirements, and architecture trade-offs for a discrete LIS, in order to provide concrete case studies and guidelines for efficient implementation of LIS systems.
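For intuition about tree-based combining in distributed uplink processing, the sketch below (toy dimensions, a simple narrowband model, and a plain matched-filter/zero-forcing pipeline, not the paper's interference cancellation algorithms) has each panel reduce its received samples to per-user partial statistics that are summed pairwise up a binary tree before one small centralized solve.

```python
# Tree-based aggregation: no single link carries all raw antenna samples.
import numpy as np

rng = np.random.default_rng(0)
num_panels, antennas_per_panel, num_users = 8, 16, 4   # power-of-two panel count

# Per-panel channel matrices and received samples (toy narrowband model).
H = [rng.standard_normal((antennas_per_panel, num_users)) for _ in range(num_panels)]
x = rng.standard_normal(num_users)                      # transmitted user symbols
y = [Hp @ x + 0.05 * rng.standard_normal(antennas_per_panel) for Hp in H]

# Local processing: each panel reduces its samples to per-user statistics.
partials = [Hp.T @ yp for Hp, yp in zip(H, y)]          # each of shape (num_users,)
grams = [Hp.T @ Hp for Hp in H]                         # local Gram matrices


def tree_sum(items):
    """Pairwise sums halve the number of messages at each tree level."""
    while len(items) > 1:
        items = [items[i] + items[i + 1] for i in range(0, len(items), 2)]
    return items[0]


z = tree_sum(partials)           # aggregated matched-filter statistic
G = tree_sum(grams)              # aggregated Gram matrix
x_hat = np.linalg.solve(G, z)    # small centralized zero-forcing-style solve
print("true symbols:", np.round(x, 2))
print("estimated   :", np.round(x_hat, 2))
```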