Making the Case for Visualization

Submitted by: Jacqueline Faherty
Published: 2019
Language: English
Author: Robert Hurt

Visual representation of information is a fundamental tool for advancing our understanding of science. It enables the research community to extract new knowledge from complex datasets, and plays an equally vital role in communicating new results across a spectrum of public audiences. Visualizations that make research results accessible to the public have been popularized by the press, and are used in formal education, informal learning settings, and all aspects of lifelong learning. In particular, visualizations of astronomical data (hereafter astrovisualization or astroviz) have broadly captured the human imagination, and are in high demand. Astrovisualization practitioners need a wide variety of specialized skills and expertise spanning multiple disciplines (art, science, technology). As astrophysics research continues to evolve into a more data-rich science, astroviz is also evolving from artists' conceptions to data-driven visualizations, and from two-dimensional images to three-dimensional prints, requiring new skills for development. Currently, astroviz practitioners are spread throughout the country. Due to the specialized nature of the field, there are seldom enough practitioners at one location to form an effective research group for the exchange of knowledge on best practices and new techniques. Because of the increasing importance of visualization in modern astrophysics, the fact that the astroviz community is small and spread out in disparate locations, and the rapidly evolving nature of this field, we argue for the creation and nurturing of an Astroviz Community of Practice. We first summarize our recommendations. We then describe the current make-up of astrovisualization practitioners, give an overview of the audiences they serve, and highlight technological considerations.
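As a small illustration of the data-driven, 2-D-to-3-D shift the abstract describes, here is a minimal Python sketch that renders the same synthetic "sky image" both as a conventional 2-D view and as a 3-D relief surface of the kind that feeds 3-D printing workflows. The data and every parameter value below are invented for illustration and are not drawn from the paper.

```python
# Minimal sketch: one dataset, two representations (2-D image, 3-D relief).
# Synthetic data stands in for a real observation.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Fake "sky image": background noise plus a few Gaussian point sources.
ny, nx = 256, 256
y, x = np.mgrid[0:ny, 0:nx]
image = rng.normal(0.0, 0.05, size=(ny, nx))
for cx, cy, amp, sigma in [(64, 80, 1.0, 4), (180, 150, 0.7, 6), (120, 40, 0.5, 3)]:
    image += amp * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

fig = plt.figure(figsize=(10, 4))

# Traditional 2-D rendering.
ax1 = fig.add_subplot(1, 2, 1)
ax1.imshow(image, origin="lower", cmap="magma")
ax1.set_title("2-D image")

# The same data re-expressed as a 3-D surface (a precursor to a 3-D print).
ax2 = fig.add_subplot(1, 2, 2, projection="3d")
ax2.plot_surface(x, y, image, cmap="magma", linewidth=0)
ax2.set_title("3-D relief of the same data")

plt.tight_layout()
plt.show()
```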

Read also

We present a state-of-the-art report on visualization in astrophysics. We survey representative papers from both astrophysics and visualization and provide a taxonomy of existing approaches based on data analysis tasks. The approaches are classified into five categories: data wrangling, data exploration, feature identification, object reconstruction, and education and outreach. Our unique contribution is to combine the diverse viewpoints of both astronomers and visualization experts to identify challenges and opportunities for visualization in astrophysics. The main goal is to provide a reference point for bringing modern data analysis and visualization techniques to the rich datasets in astrophysics.
This paper introduces Polyphorm, an interactive visualization and model-fitting tool that provides a novel approach for investigating cosmological datasets. Through a fast computational simulation method inspired by the behavior of Physarum polycephalum, a unicellular slime mold organism that efficiently forages for nutrients, astrophysicists are able to extrapolate from sparse datasets, such as galaxy maps archived in the Sloan Digital Sky Survey, and then use these extrapolations to inform analyses of a wide range of other data, such as spectroscopic observations captured by the Hubble Space Telescope. Researchers can interactively update the simulation by adjusting model parameters, and then investigate the resulting visual output to form hypotheses about the data. We describe details of Polyphorm's simulation model and its interaction and visualization modalities, and we evaluate Polyphorm through three scientific use cases that demonstrate the effectiveness of our approach.
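For intuition about this class of simulation, the following toy Python sketch implements a generic 2-D Physarum-style agent model: agents sense a deposited trail field at three forward sensors, steer toward the strongest deposit, and reinforce the field as they move, while a handful of fixed "galaxy" points act as attractors (loosely analogous to sparse survey data). This is a drastic simplification of the general technique, not Polyphorm's actual algorithm; every parameter, name, and value is invented.

```python
# Toy 2-D Physarum-style agent simulation (illustrative only).
import numpy as np

SIZE = 200          # grid resolution
N_AGENTS = 5000     # number of simulated agents
SENSE_DIST = 4.0    # how far ahead agents sample the trail field
SENSE_ANGLE = 0.4   # angular offset of left/right sensors (radians)
TURN = 0.3          # turning step when a side sensor wins
DECAY = 0.95        # per-step decay of the deposited trail field

rng = np.random.default_rng(0)
pos = rng.uniform(0, SIZE, size=(N_AGENTS, 2))
heading = rng.uniform(0, 2 * np.pi, size=N_AGENTS)
trail = np.zeros((SIZE, SIZE))

# Sparse fixed "galaxy" positions act as attractors.
galaxies = rng.integers(0, SIZE, size=(30, 2))

def sense(pos, heading, offset):
    """Sample the trail field at a sensor offset from each agent's heading."""
    a = heading + offset
    p = pos + SENSE_DIST * np.stack([np.cos(a), np.sin(a)], axis=1)
    ij = np.mod(p.astype(int), SIZE)           # periodic boundary
    return trail[ij[:, 0], ij[:, 1]]

for step in range(200):
    left = sense(pos, heading, +SENSE_ANGLE)
    ahead = sense(pos, heading, 0.0)
    right = sense(pos, heading, -SENSE_ANGLE)
    # Steer toward the strongest deposit; keep going straight if "ahead" wins.
    heading = np.where(left > np.maximum(ahead, right), heading + TURN, heading)
    heading = np.where(right > np.maximum(ahead, left), heading - TURN, heading)
    # Move forward and deposit onto the trail field.
    pos = np.mod(pos + np.stack([np.cos(heading), np.sin(heading)], axis=1), SIZE)
    ij = pos.astype(int)
    np.add.at(trail, (ij[:, 0], ij[:, 1]), 1.0)
    # Fixed data points continually re-seed the field; the rest evaporates.
    np.add.at(trail, (galaxies[:, 0], galaxies[:, 1]), 5.0)
    trail *= DECAY
```

After a few hundred steps, `trail` concentrates into filamentary structures connecting the fixed points, which is the qualitative behavior that makes this family of models useful for extrapolating sparse cosmological data.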
While visualizations play a crucial role in gaining insights from data, generating useful visualizations from a complex dataset is far from an easy task. Besides understanding the functionality provided by existing visualization libraries, generating the desired visualization also requires reshaping and aggregating the underlying data, as well as composing different visual elements to achieve the intended visual narrative. This paper aims to simplify visualization tasks by automatically synthesizing the required program from simple visual sketches provided by the user. Specifically, given an input dataset and a visual sketch that demonstrates how to visualize a very small subset of this data, our technique automatically generates a program that can be used to visualize the entire dataset. Automating visualization poses several challenges. First, because many visualization tasks require data wrangling in addition to generating plots, we need to decompose the end-to-end synthesis task into two separate sub-problems. Second, because the intermediate specification that results from the decomposition is necessarily imprecise, the data wrangling task becomes particularly challenging in our context. In this paper, we address these problems by developing a new compositional visualization-by-example technique that (a) decomposes the end-to-end task into two different synthesis problems over different DSLs and (b) leverages bidirectional program analysis to deal with the complexity that arises from having an imprecise intermediate specification. We implemented our visualization-by-example algorithm, Viser, and evaluated it on 83 visualization tasks collected from online forums and tutorials. Viser can solve 84% of these benchmarks within a 600-second time limit, and, for those tasks that can be solved, the desired visualization is among the top 5 generated by Viser in 70% of the cases.
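To make the decomposition concrete, here is a toy Python rendition of the visualization-by-example idea under heavy simplifying assumptions: enumerate candidate data-wrangling programs (group-by plus aggregate over a pandas table), and accept those whose output is consistent with the user's partial sketch of points. The real system searches two far richer DSLs with bidirectional program analysis; the dataset, sketch, and search space below are invented for illustration.

```python
# Toy "visualization by example": find wrangling programs consistent
# with a partial sketch of the desired plot.
import itertools
import pandas as pd

# Full dataset the user wants to visualize (hypothetical values).
data = pd.DataFrame({
    "region": ["east", "east", "west", "west", "north"],
    "year":   [2020, 2021, 2020, 2021, 2020],
    "sales":  [10, 12, 7, 9, 5],
})

# The "visual sketch": two example (x, y) points the user drew.
sketch = {("east", 22), ("west", 16)}

candidates = []
# Enumerate group-by/aggregate programs over the table's columns.
for key, val in itertools.product(["region", "year"], ["sales"]):
    for agg in ["sum", "mean", "max"]:
        result = data.groupby(key)[val].agg(agg)
        points = set(zip(result.index, result))
        # The sketch underspecifies the plot: accept any program whose
        # full output contains all sketched points.
        if sketch <= points:
            candidates.append((key, val, agg, points))

for key, val, agg, points in candidates:
    print(f"plot {agg}({val}) grouped by {key}: {sorted(points)}")
```

Here only `sum(sales) grouped by region` survives the consistency check, and applying that program to the whole table yields the complete visualization, including points the user never sketched.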
Xin Qian, Ryan A. Rossi, Fan Du (2021)
Visualization recommendation work has focused solely on scoring visualizations based on the underlying dataset, and not on the actual user and their past visualization feedback. These systems recommend the same visualizations for every user, despite the fact that user interests, intent, and visualization preferences are likely to be fundamentally different, yet vitally important. In this work, we formally introduce the problem of personalized visualization recommendation and present a generic learning framework for solving it. In particular, we focus on recommending visualizations personalized for each individual user based on their past visualization interactions (e.g., viewed, clicked, manually created) along with the data from those visualizations. More importantly, the framework can learn from visualizations relevant to other users, even if the visualizations are generated from completely different datasets. Experiments demonstrate the effectiveness of the approach, as it leads to higher-quality visualization recommendations tailored to the specific user's intent and preferences. To support research on this new problem, we release our user-centric visualization corpus, consisting of 17.4k users exploring 94k datasets with 2.3 million attributes, and 32k user-generated visualizations.
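One minimal way to make the ranking formulation concrete is a toy matrix-factorization sketch: learn user and visualization embeddings from an implicit-feedback interaction matrix, then rank unseen visualizations per user. This is a simplification invented for illustration, not the paper's framework (which, among other things, transfers across users and datasets); all sizes and hyperparameters are made up.

```python
# Toy personalized ranking via matrix factorization (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n_users, n_vis, dim = 20, 50, 8

# Implicit feedback: 1 where a user viewed/clicked/created a visualization.
interactions = (rng.random((n_users, n_vis)) < 0.1).astype(float)

U = rng.normal(0, 0.1, (n_users, dim))   # user embeddings
V = rng.normal(0, 0.1, (n_vis, dim))     # visualization embeddings

lr = 0.05
for epoch in range(200):
    # Gradient descent on squared reconstruction error of the feedback matrix.
    err = U @ V.T - interactions
    gU = err @ V / n_vis
    gV = err.T @ U / n_users
    U -= lr * gU
    V -= lr * gV

# Recommend: highest-scoring visualizations the user has not interacted with.
user = 3
scores = U[user] @ V.T
scores[interactions[user] > 0] = -np.inf
print("top-5 recommendations for user 3:", np.argsort(scores)[::-1][:5])
```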
Standard lossy image compression algorithms aim to preserve an image's appearance while minimizing the number of bits needed to transmit it. However, the amount of information actually needed by a user for downstream tasks -- e.g., deciding which product to click on in a shopping website -- is likely much lower. To achieve this lower bitrate, we would ideally only transmit the visual features that drive user behavior, while discarding details irrelevant to the user's decisions. We approach this problem by training a compression model through human-in-the-loop learning as the user performs tasks with the compressed images. The key insight is to train the model to produce a compressed image that induces the user to take the same action that they would have taken had they seen the original image. To approximate the loss function for this model, we train a discriminator that tries to distinguish whether a user's action was taken in response to the compressed image or the original. We evaluate our method through experiments with human participants on four tasks: reading handwritten digits, verifying photos of faces, browsing an online shopping catalogue, and playing a car racing video game. The results show that our method learns to match the user's actions with and without compression at lower bitrates than baseline methods, and adapts the compression model to the user's behavior: it preserves the digit number and randomizes handwriting style in the digit reading task, preserves hats and eyeglasses while randomizing faces in the photo verification task, preserves the perceived price of an item while randomizing its color and background in the online shopping task, and preserves upcoming bends in the road in the car racing game.
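The adversarial objective described here can be sketched schematically: a discriminator learns to tell whether an action followed the original or the compressed image, while the compressor is updated to fool it. The toy Python below uses a simulated "user", a one-parameter compressor, and a logistic discriminator, all invented for illustration; it mirrors the shape of the training signal only, not the paper's actual models or data.

```python
# Schematic sketch of the human-in-the-loop adversarial objective.
import numpy as np

rng = np.random.default_rng(0)

def user_action(image):
    # Stand-in for a human: reacts only to the image's mean brightness.
    return (image.mean() > 0.5).astype(float)

def compress(image, w):
    # Toy "compressor": a single learned gain, clipped to [0, 1].
    return np.clip(image * w, 0.0, 1.0)

def d_logit(image, action, theta):
    # Logistic discriminator over (image feature, action).
    return theta[0] * image.mean() + theta[1] * action + theta[2]

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

w = 0.5                      # compressor parameter
theta = np.zeros(3)          # discriminator parameters
lr_d, lr_g, eps = 0.1, 0.05, 1e-3

for step in range(2000):
    image = rng.random(16)                     # a fresh toy "image"
    a_orig = user_action(image)                # action on the original
    a_comp = user_action(compress(image, w))   # action on the compressed

    # Discriminator step: label 1 = compressed, 0 = original; descend
    # the logistic cross-entropy loss for both samples.
    for img, act, label in [(image, a_orig, 0.0),
                            (compress(image, w), a_comp, 1.0)]:
        p = sigmoid(d_logit(img, act, theta))
        theta -= lr_d * (p - label) * np.array([img.mean(), act, 1.0])

    # Compressor step: fool the discriminator into labeling compressed
    # samples as original. Finite differences, since the simulated
    # user is not differentiable.
    def g_loss(wv):
        c = compress(image, wv)
        return -np.log(1.0 - sigmoid(d_logit(c, user_action(c), theta)) + 1e-9)
    w -= lr_g * (g_loss(w + eps) - g_loss(w - eps)) / (2 * eps)
```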