
SightBi: Exploring Cross-View Data Relationships with Biclusters

Published by Maoyuan Sun
Publication date: 2021
Research field: Information engineering
Paper language: English





Multiple-view visualization (MV) has been heavily used in visual analysis tools for sensemaking of data in various domains (e.g., bioinformatics, cybersecurity, and text analytics). One common task of visual analysis with multiple views is to relate data across different views. For example, to identify threats, an intelligence analyst needs to link people from a social network graph with locations on a crime map, and then search for and read relevant documents. Currently, exploring cross-view data relationships relies heavily on view-coordination techniques (e.g., brushing and linking), which may require significant user effort on many trial-and-error attempts, such as repeatedly selecting elements in one view, and then observing and following elements highlighted in other views. To address this, we present SightBi, a visual analytics approach for supporting cross-view data relationship exploration. We discuss the design rationale of SightBi in detail, along with identified user tasks regarding the use of cross-view data relationships. SightBi formalizes cross-view data relationships as biclusters, computes them from a dataset, and presents them with a bi-context design that creates stand-alone relationship-views. This helps preserve existing views and offers an overview of cross-view data relationships to guide user exploration. Moreover, SightBi allows users to interactively manage the layout of multiple views by using the newly created relationship-views. With a usage scenario, we demonstrate the usefulness of SightBi for sensemaking of cross-view data relationships.
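The abstract's key idea is formalizing cross-view relationships as biclusters: groups of entities in one view that are all linked to the same group of entities in another view. A minimal sketch of that notion, assuming a toy person-location relation (the data and the closure-based enumeration here are illustrative, not the paper's actual algorithm):

```python
from itertools import combinations

# Hypothetical cross-view data: people (social-network view) linked to
# locations (crime-map view).
relation = {
    "alice": {"park", "dock"},
    "bob": {"park", "dock"},
    "carol": {"dock"},
}

def closure(people):
    """Locations shared by all given people, then every person who
    is linked to all of those locations (a Galois closure)."""
    locs = set.intersection(*(relation[p] for p in people))
    matched = {p for p, ls in relation.items() if locs <= ls}
    return frozenset(matched), frozenset(locs)

# Enumerate closed biclusters: maximal (people, locations) pairs where
# every person is linked to every location.
biclusters = set()
for r in range(1, len(relation) + 1):
    for group in combinations(relation, r):
        people, locs = closure(set(group))
        if locs:
            biclusters.add((people, locs))

for people, locs in biclusters:
    print(sorted(people), sorted(locs))
```

Each bicluster is exactly the kind of cross-view relationship SightBi surfaces as a stand-alone relationship-view, instead of asking the user to discover it by brushing and linking.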




Read also

Designing future IoT ecosystems requires new approaches and perspectives to understand everyday practices. While researchers recognize the importance of understanding social aspects of everyday objects, limited studies have explored the possibilities of combining data-driven patterns with human interpretations to investigate emergent relationships among objects. This work presents Thing Constellation Visualizer (thingCV), a novel interactive tool for visualizing the social network of objects based on their co-occurrence as computed from a large collection of photos. ThingCV enables perspective-changing design explorations over the network of objects with scalable links. Two exploratory workshops were conducted to investigate how designers navigate and make sense of a network of objects through thingCV. The results of eight participants showed that designers were actively engaged in identifying interesting objects and their associated clusters of related objects. The designers projected social qualities onto the identified objects and their communities. Furthermore, the designers changed their perspectives to revisit familiar contexts and to generate new insights through the exploration process. This work contributes a novel approach to combining data-driven models with designerly interpretations of thing constellation towards More-Than Human-Centred Design of IoT ecosystems.
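The thingCV abstract describes building a social network of objects from their co-occurrence in a photo collection. A hedged sketch of that counting step, assuming photos are already reduced to sets of detected object labels (the data and names are illustrative, not the tool's API):

```python
from collections import Counter
from itertools import combinations

# Hypothetical detected-object labels, one set per photo.
photos = [
    {"mug", "laptop", "lamp"},
    {"mug", "laptop"},
    {"lamp", "plant"},
]

# Edge weight between two objects = number of photos in which both appear.
edges = Counter()
for labels in photos:
    for a, b in combinations(sorted(labels), 2):
        edges[(a, b)] += 1

print(edges[("laptop", "mug")])  # mug and laptop co-occur in 2 photos
```

The resulting weighted edges form the "thing constellation" network that designers then navigate and interpret.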
Virtual Reality (VR) enables users to collaborate while exploring scenarios not realizable in the physical world. We propose CollabVR, a distributed multi-user collaboration environment, to explore how digital content improves expression and understanding of ideas among groups. To achieve this, we designed and examined three possible configurations for participants and shared manipulable objects. In configuration (1), participants stand side-by-side. In (2), participants are positioned across from each other, mirrored face-to-face. In (3), called eyes-free, participants stand side-by-side looking at a shared display, and draw upon a horizontal surface. We also explored a telepathy mode, in which participants could see from each other's point of view. We implemented 3DSketch visual objects for participants to manipulate and move between virtual content boards in the environment. To evaluate the system, we conducted a study in which four people at a time used each of the three configurations to cooperate and communicate ideas with each other. We provide experimental results and interview responses.
Aoyu Wu, Yun Wang, Xinhuan Shu (2021)
Visualizations themselves have become a data format. Akin to other data formats such as text and images, visualizations are increasingly created, stored, shared, and (re-)used with artificial intelligence (AI) techniques. In this survey, we probe the underlying vision of formalizing visualizations as an emerging data format and review the recent advance in applying AI techniques to visualization data (AI4VIS). We define visualization data as the digital representations of visualizations in computers and focus on data visualization (e.g., charts and infographics). We build our survey upon a corpus spanning ten different fields in computer science with an eye toward identifying important common interests. Our resulting taxonomy is organized around WHAT is visualization data and its representation, WHY and HOW to apply AI to visualization data. We highlight a set of common tasks that researchers apply to the visualization data and present a detailed discussion of AI approaches developed to accomplish those tasks. Drawing upon our literature review, we discuss several important research questions surrounding the management and exploitation of visualization data, as well as the role of AI in support of those processes. We make the list of surveyed papers and related material available online at ai4vis.github.io.
Peng Xie, Wenyuan Tao, Jie Li (2021)
Multi-dimensional data exploration is a classic research topic in visualization. Most existing approaches are designed for identifying record patterns in dimensional space or a subspace. In this paper, we propose a visual analytics approach to exploring subset patterns. The core of the approach is a subset embedding network (SEN) that represents a group of subsets as uniformly-formatted embeddings. We implement the SEN as multiple subnets with separate loss functions. The design makes it possible to handle arbitrary subsets and capture the similarity of subsets on single features, thus achieving accurate pattern exploration, which in most cases is searching for subsets having similar values on a few features. Moreover, each subnet is a fully-connected neural network with one hidden layer. The simple structure brings high training efficiency. We integrate the SEN into a visualization system that achieves a 3-step workflow. Specifically, analysts (1) partition the given dataset into subsets, (2) select portions in a projected latent space created using the SEN, and (3) determine the existence of patterns within selected subsets. Generally, the system combines visualizations, interactions, automatic methods, and quantitative measures to balance exploration flexibility and operation efficiency, and to improve the interpretability and faithfulness of the identified patterns. Case studies and quantitative experiments on multiple open datasets demonstrate the general applicability and effectiveness of our approach.
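The SEN abstract describes per-feature subnets, each a fully-connected network with one hidden layer, whose outputs are combined into a uniform embedding. A forward-pass-only sketch of that structure, with random illustrative weights rather than the paper's trained model or loss functions:

```python
import numpy as np

rng = np.random.default_rng(0)

def subnet(x, in_dim, hidden=16, out_dim=8):
    """One-hidden-layer fully-connected subnet (illustrative weights)."""
    w1 = rng.standard_normal((in_dim, hidden)) * 0.1
    w2 = rng.standard_normal((hidden, out_dim)) * 0.1
    return np.tanh(x @ w1) @ w2

# A subset is a group of records; here each feature column is summarized
# by its mean (an assumed, simplified summary) and each feature's summary
# is embedded by its own subnet.
subset = rng.standard_normal((50, 3))            # 50 records, 3 features
embeddings = [subnet(subset[:, j].mean(keepdims=True), in_dim=1)
              for j in range(subset.shape[1])]
embedding = np.concatenate(embeddings)           # uniformly-formatted embedding
print(embedding.shape)
```

Keeping one subnet per feature is what lets similarity on a single feature remain visible in the combined embedding, which the abstract identifies as the common case of subset-pattern search.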
341 - Ingmar Steiner 2012
The importance of modeling speech articulation for high-quality audiovisual (AV) speech synthesis is widely acknowledged. Nevertheless, while state-of-the-art, data-driven approaches to facial animation can make use of sophisticated motion capture techniques, the animation of the intraoral articulators (viz. the tongue, jaw, and velum) typically relies on simple rules or viseme morphing, in stark contrast to the otherwise high quality of facial modeling. Using appropriate speech production data could significantly improve the quality of articulatory animation for AV synthesis.