
Uncertainty-Oriented Ensemble Data Visualization and Exploration using Variable Spatial Spreading

Submitted by: Mingdong Zhang
Publication date: 2020
Research field: Informatics Engineering
Language: English





As an important method for handling potential uncertainties in numerical simulations, ensemble simulation has been widely applied in many disciplines. Visualization is a promising and powerful way to analyze ensemble simulations. However, conventional visualization methods mainly aim at simplifying data and highlighting important information based on domain expertise rather than providing a flexible mechanism for data exploration and intervention, which forces analysts to repeat trial-and-error procedures. To resolve this issue, we propose a new perspective on ensemble data analysis that uses the attribute variable dimension as the primary analysis dimension. Specifically, we propose a variable uncertainty calculation method based on variable spatial spreading. Building on this method, we design an interactive ensemble analysis framework that supports flexible exploration of the ensemble data. The proposed spreading curve view, region stability heat map view, and temporal analysis view, together with the commonly used 2D map view, jointly support uncertainty distribution perception, region selection, and temporal analysis, among other analysis requirements. We verify our approach by analyzing a real-world ensemble simulation dataset, and feedback collected from domain experts confirms the efficacy of our framework.
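The spreading-based uncertainty measure can be sketched in a few lines of Python. The snippet below is a minimal illustration under our own assumptions, not the paper's exact formulation: it assumes one attribute variable sampled on a 2D grid for every ensemble member and approximates spatial spreading by the normalised inter-member standard deviation at each grid cell; the function name variable_spreading_uncertainty is ours.

```python
import numpy as np

def variable_spreading_uncertainty(ensemble):
    """Score one attribute variable by its spreading across ensemble members.

    ensemble: array of shape (members, height, width) holding the variable
    sampled on a 2D grid for every ensemble member.

    Illustrative approximation: spreading is taken to be the inter-member
    standard deviation at each grid cell, normalised by the variable's
    overall value range so scores are comparable across variables.
    Returns (scalar score, per-cell spread field).
    """
    per_cell_spread = ensemble.std(axis=0)                  # (height, width)
    value_range = ensemble.max() - ensemble.min() + 1e-12   # avoid divide-by-zero
    spread_field = per_cell_spread / value_range
    return float(spread_field.mean()), spread_field


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.random((64, 64))
    # 20 synthetic members: shared spatial structure plus member-specific noise
    ensemble = np.stack([base + 0.1 * rng.standard_normal((64, 64))
                         for _ in range(20)])
    score, field = variable_spreading_uncertainty(ensemble)
    print(f"spreading-based uncertainty score: {score:.4f}")
```

In a framework like the one described above, such a per-variable score and its per-cell field could feed views such as the spreading curve view; here it simply ranks variables by how strongly their ensemble members disagree.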




Read also

Background: While many different visual representations of data values can be found in visualizations, it is less common to see visual representations that include uncertainty, especially in visualizations intended for non-technical audiences. Objective: Our aim is to rigorously define and evaluate the novel use of visual entropy as a measure of shape that allows us to construct an ordered scale of glyphs for representing both uncertainty and value in 2D and 3D environments. Method: We use sample entropy as a numerical measure of visual entropy to construct, using R and Blender, a set of glyphs that vary in their complexity. Results: A Bradley-Terry analysis of a pairwise comparison of the glyphs shows participants (n=19) ordered the glyphs as predicted by the visual entropy score (linear regression R2 > 0.97, p < 0.001). We also evaluate whether the glyphs can effectively represent uncertainty using a signal detection method; participants (n=15) were able to search for glyphs representing uncertainty with high sensitivity and low error rates. Conclusion: Visual entropy is a novel cue for representing ordered data and provides a channel that allows the uncertainty of a measure to be presented alongside its mean value.
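As a rough illustration of the entropy measure behind this glyph ordering, the Python sketch below computes sample entropy for a 1D sequence. The assumption that a glyph's shape can be encoded as such a sequence (for example a radial profile), and the default m and r values, are ours; the study itself builds the glyphs in R and Blender.

```python
import numpy as np

def sample_entropy(signal, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1D sequence.

    Counts pairs of length-m templates that match within tolerance
    r * std (Chebyshev distance, self-matches excluded), then the pairs
    that still match when extended to length m + 1, and returns
    -ln(A / B). Higher values indicate a more complex sequence.
    """
    x = np.asarray(signal, dtype=float)
    n = len(x)
    tol = r * x.std()

    def match_count(length):
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= tol))
        return count

    b = match_count(m)      # length-m matches
    a = match_count(m + 1)  # length-(m + 1) matches
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")


# A smooth profile should score lower than a noisy one.
rng = np.random.default_rng(1)
smooth = np.sin(np.linspace(0, 4 * np.pi, 200))
noisy = smooth + 0.5 * rng.standard_normal(200)
print(sample_entropy(smooth), sample_entropy(noisy))
```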
Marching squares (MS) and marching cubes (MC) are widely used algorithms for level-set visualization of scientific data. In this paper, we address the challenge of uncertainty visualization of the topology cases of the MS and MC algorithms for uncertain scalar field data sampled on a uniform grid. The visualization of the MS and MC topology cases for uncertain data is challenging due to their exponential nature and the possibility of multiple topology cases per cell of a grid. We propose the topology case count and entropy-based techniques for quantifying uncertainty in the topology cases of the MS and MC algorithms when noise in data is modeled with probability distributions. We demonstrate the applicability of our techniques for independent and correlated uncertainty assumptions. We visualize the quantified topological uncertainty via color mapping proportional to uncertainty, as well as with interactive probability queries in the MS case and entropy isosurfaces in the MC case. We demonstrate the utility of our uncertainty quantification framework in identifying the isovalues exhibiting relatively high topological uncertainty. We illustrate the effectiveness of our techniques via results on synthetic, simulation, and hixel datasets.
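A toy version of the per-cell entropy computation for marching squares, under the independent-noise assumption mentioned above: each of a cell's four corners is given a probability of lying above the isovalue, the 2^4 corner sign patterns are enumerated, and the Shannon entropy of their probabilities quantifies how uncertain the cell's configuration is. Working directly on sign patterns rather than the grouped MS topology cases is our simplification, not the paper's full pipeline.

```python
import numpy as np
from itertools import product

def ms_sign_pattern_entropy(p_above):
    """Shannon entropy (bits) of the marching-squares corner sign patterns.

    p_above: length-4 sequence with the probability that each corner of
    one grid cell lies above the isovalue, corners assumed independent.
    0 bits means the configuration is certain; 4 bits is maximal.
    """
    p_above = np.asarray(p_above, dtype=float)
    probs = []
    for signs in product((0, 1), repeat=4):
        p = 1.0
        for corner, above in enumerate(signs):
            p *= p_above[corner] if above else 1.0 - p_above[corner]
        probs.append(p)
    probs = np.array(probs)
    nonzero = probs[probs > 0]
    return float(-(nonzero * np.log2(nonzero)).sum())


# Two certain corners and two 50/50 corners -> exactly 2 bits of uncertainty.
print(ms_sign_pattern_entropy([1.0, 0.0, 0.5, 0.5]))
```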
Matthew A. Petroff, 2021
Color cycles, ordered sets of colors for data visualization, that balance aesthetics with accessibility considerations are presented. In order to model aesthetic preference, data were collected with an online survey, and the results were used to train a machine-learning model. To ensure accessibility, this model was combined with minimum-perceptual-distance constraints, including for simulated color-vision deficiencies, as well as with minimum-lightness-distance constraints for grayscale printing, maximum-lightness constraints for maintaining contrast with a white background, and scores from a color-saliency model for ease of use of the colors in verbal and written descriptions. Optimal color cycles containing six, eight, and ten colors were generated using the data-driven aesthetic-preference model and accessibility constraints. Due to the balance of aesthetics and accessibility considerations, the resulting color cycles can serve as reasonable defaults in data-plotting codes.
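The accessibility side of such a construction can be illustrated with a small greedy filter. In the sketch below, the CIE76 distance in CIELAB, the specific thresholds, and the assumption that candidate colors arrive already sorted by an aesthetic-preference score are all ours; the paper uses color-vision-deficiency simulation and a learned preference model rather than these stand-ins.

```python
import numpy as np
from skimage.color import rgb2lab  # sRGB in [0, 1] -> CIELAB

def greedy_accessible_cycle(candidates_rgb, n_colors=6,
                            min_delta_e=20.0, min_delta_l=10.0, max_l=90.0):
    """Greedily assemble a color cycle under simple accessibility constraints.

    candidates_rgb: (k, 3) array of sRGB colors in [0, 1], assumed sorted
    best-first by some preference score. A candidate is accepted only if
    it is not too light for a white background (L* <= max_l), is at least
    min_delta_e away from every accepted color (CIE76 distance, a crude
    stand-in for CVD-aware perceptual distance), and differs in lightness
    by at least min_delta_l so the cycle survives grayscale printing.
    """
    candidates_rgb = np.asarray(candidates_rgb, dtype=float)
    lab = rgb2lab(candidates_rgb.reshape(1, -1, 3))[0]
    chosen = []
    for i, c in enumerate(lab):
        if c[0] > max_l:
            continue
        if all(np.linalg.norm(c - lab[j]) >= min_delta_e and
               abs(c[0] - lab[j, 0]) >= min_delta_l for j in chosen):
            chosen.append(i)
        if len(chosen) == n_colors:
            break
    return candidates_rgb[chosen]


# Example: filter a random candidate pool down to a six-color cycle.
rng = np.random.default_rng(2)
cycle = greedy_accessible_cycle(rng.random((500, 3)))
print(np.round(cycle, 3))
```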
Multivariate spatial data plays an important role in computational science and engineering simulations. The potential features and hidden relationships in multivariate data can assist scientists to gain an in-depth understanding of a scientific process, verify a hypothesis and further discover a new physical or chemical law. In this paper, we present a comprehensive survey of the state-of-the-art techniques for multivariate spatial data visualization. We first introduce the basic concept and characteristics of multivariate spatial data, and describe three main tasks in multivariate data visualization: feature classification, fusion visualization, and correlation analysis. Finally, we prospect potential research topics for multivariate data visualization according to the current research.
Data visualization is the process by which data of any size or dimensionality is processed to produce an understandable set of data in a lower dimensionality, allowing it to be manipulated and understood more easily by people. The goal of our paper is to survey the performance of current high-dimensional data visualization techniques and quantify their strengths and weaknesses through relevant quantitative measures, including runtime, memory usage, clustering quality, separation quality, global structure preservation, and local structure preservation. To perform the analysis, we select a subset of state-of-the-art methods. Our work shows how the selected algorithms produce embeddings with unique qualities that lend themselves towards certain tasks, and how each of these algorithms are constrained by compute resources.