
Deep Colormap Extraction from Visualizations

Published by: Linping Yuan
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





This work presents a new approach based on deep learning to automatically extract colormaps from visualizations. After summarizing colors in an input visualization image as a Lab color histogram, we pass the histogram to a pre-trained deep neural network, which learns to predict the colormap that produces the visualization. To train the network, we create a new dataset of 64K visualizations that cover a wide variety of data distributions, chart types, and colormaps. The network adopts an atrous spatial pyramid pooling module to capture color features at multiple scales in the input color histograms. We then classify the predicted colormap as discrete or continuous and refine the predicted colormap based on its color histogram. Quantitative comparisons to existing methods show the superior performance of our approach on both synthetic and real-world visualizations. We further demonstrate the utility of our method with two use cases, i.e., color transfer and color remapping.
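The first step of this pipeline, binning an image's colors into a normalized 3D Lab histogram, is straightforward to prototype. The sketch below is a minimal illustration only; the bin counts, channel ranges, and function name are assumptions, not the paper's implementation.

```python
# Hypothetical sketch: summarize a visualization image as a Lab color
# histogram (the network input described above). Bin counts and channel
# ranges are assumed, not taken from the paper.
import numpy as np
from skimage import io
from skimage.color import rgb2lab

def lab_histogram(image_path, bins=(10, 16, 16)):
    """Return a normalized 3D histogram over the Lab color space."""
    rgb = io.imread(image_path)[..., :3] / 255.0   # drop any alpha channel
    lab = rgb2lab(rgb).reshape(-1, 3)              # one Lab sample per pixel
    # Approximate value ranges of the L, a, and b channels.
    ranges = [(0.0, 100.0), (-128.0, 127.0), (-128.0, 127.0)]
    hist, _ = np.histogramdd(lab, bins=bins, range=ranges)
    return hist / hist.sum()                       # probabilities, sum to 1
```

A histogram like this discards spatial layout entirely, which is what lets the network attend to color usage rather than chart geometry.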


Read also

Scatterplots are one of the simplest and most commonly-used visualizations for understanding quantitative, multidimensional data. However, since scatterplots only depict two attributes at a time, analysts often need to manually generate and inspect large numbers of scatterplots to make sense of large datasets with many attributes. We present a visual query system for scatterplots, SCATTERSEARCH, that enables users to visually search and browse through large collections of scatterplots. Users can query for other visualizations based on a region of interest or find other scatterplots that look similar to a selected one. We present two demo scenarios, provide a system overview of SCATTERSEARCH, and outline future directions.
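The abstract does not describe SCATTERSEARCH's matching algorithm, so purely as a hypothetical illustration of similarity search over scatterplots, the sketch below summarizes each plot as a normalized 2D density grid and compares grids by cosine similarity. Every name and parameter here is an assumption.

```python
# Hypothetical illustration only: one simple way to compare scatterplots.
# This is not SCATTERSEARCH's actual method, which the abstract leaves
# unspecified.
import numpy as np

def plot_signature(x, y, bins=32):
    """Summarize a scatterplot as a flattened, normalized 2D density grid."""
    hist, _, _ = np.histogram2d(x, y, bins=bins)
    return (hist / hist.sum()).ravel()

def similarity(a, b):
    """Cosine similarity between two scatterplot signatures."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Ranking a collection by `similarity` against a query plot's signature would support a crude version of the "find similar scatterplots" interaction.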
Eytan Adar, Elsie Lee (2020)
Significant research has provided robust task and evaluation languages for the analysis of exploratory visualizations. Unfortunately, these taxonomies fail when applied to communicative visualizations. Instead, designers often resort to evaluating communicative visualizations from the cognitive efficiency perspective: can the recipient accurately decode my message/insight? However, designers are unlikely to be satisfied if the message went in one ear and out the other. The consequence of this inconsistency is that it is difficult to design or select between competing options in a principled way. The problem we address is the fundamental mismatch between how designers want to describe their intent and the language they have. We argue that visualization designers can address this limitation through a learning lens: that the recipient is a student and the designer a teacher. By using learning objectives, designers can better define, assess, and compare communicative visualizations. We illustrate how the learning-based approach provides a framework for understanding a wide array of communicative goals. To understand how the framework can be applied (and its limitations), we surveyed and interviewed members of the Data Visualization Society using their own visualizations as a probe. Through this study we identified the broad range of objectives in communicative visualizations and the prevalence of certain objective types.
Feedback tools help people to monitor information about themselves to improve their health, sustainability practices, or personal well-being. Yet reasoning about personal data (e.g., pedometer counts, blood pressure readings, or home electricity consumption) to gain a deep understanding of your current practices and how to change can be challenging with the data alone. We integrate quantitative feedback data within a personal digital calendar; this approach aims to make the feedback data readily accessible and more comprehensible. We report on an eight-week field study of an on-calendar visualization tool. Results showed that a personal calendar can provide rich context for people to reason about their feedback data. The on-calendar visualization enabled people to quickly identify and reason about regular patterns and anomalies. Based on our results, we also derived a model of the behavior feedback process that extends existing technology adoption models. With that, we reflected on potential barriers to the ongoing use of feedback tools.
Set systems are used to model data that naturally arises in many contexts: social networks have communities, musicians have genres, and patients have symptoms. Visualizations that accurately reflect the information in the underlying set system make it possible to identify the set elements, the sets themselves, and the relationships between the sets. In static contexts, such as print media or infographics, it is necessary to capture this information without the help of interactions. With this in mind, we consider three different systems for medium-sized set data, LineSets, EulerView, and MetroSets, and report the results of a controlled human-subjects experiment comparing their effectiveness. Specifically, we evaluate the performance, in terms of time and error, on tasks that cover the spectrum of static set-based tasks. We also collect and analyze qualitative data about the three different visualization systems. Our results include statistically significant differences, suggesting that MetroSets performs and scales better.
Alex Kale, Yifan Wu (2021)
Analysts often make visual causal inferences about possible data-generating models. However, visual analytics (VA) software tends to leave these models implicit in the mind of the analyst, which casts doubt on the statistical validity of informal visual insights. We formally evaluate the quality of causal inferences from visualizations by adopting causal support -- a Bayesian cognition model that learns the probability of alternative causal explanations given some data -- as a normative benchmark for causal inferences. We contribute two experiments assessing how well crowdworkers can detect (1) a treatment effect and (2) a confounding relationship. We find that chart users' causal inferences tend to be insensitive to sample size, such that they deviate from our normative benchmark. While interactively cross-filtering data in visualizations can improve sensitivity, on average users do not perform reliably better with common visualizations than they do with textual contingency tables. These experiments demonstrate the utility of causal support as an evaluation framework for inferences in VA and point to opportunities to make analysts' mental models more explicit in VA software.
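To make the benchmark concrete, causal support for the treatment-effect task can be read as a Bayes factor: how much more probable the observed data are under "treatment changes the outcome rate" than under "no effect". The sketch below is a minimal version assuming binary outcomes and uniform Beta(1, 1) priors; the paper's exact model may differ.

```python
# Minimal sketch of causal support as a log Bayes factor for detecting a
# treatment effect. Assumes binary outcomes and uniform Beta(1, 1) priors;
# not necessarily the authors' exact formulation.
from scipy.special import betaln

def causal_support(k_treat, n_treat, k_ctrl, n_ctrl):
    """Log Bayes factor: separate outcome rates vs. one shared rate."""
    # No-effect model: a single pooled rate explains both groups.
    log_m0 = betaln(k_treat + k_ctrl + 1,
                    (n_treat - k_treat) + (n_ctrl - k_ctrl) + 1)
    # Effect model: independent rates for treatment and control.
    log_m1 = (betaln(k_treat + 1, n_treat - k_treat + 1)
              + betaln(k_ctrl + 1, n_ctrl - k_ctrl + 1))
    return log_m1 - log_m0  # > 0 favors a treatment effect

# E.g., 30/100 vs. 10/100 yields strong support, while 3/10 vs. 1/10 is
# much weaker: the same proportions at a larger sample size should move the
# benchmark, which is exactly the sensitivity the experiments test for.
```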