
Helium: Visualization of Large Scale Plant Pedigrees

Published by: Paul Shaw
Publication date: 2014
Research field: Informatics Engineering
Paper language: English


Background: Plant breeders are utilising an increasingly diverse range of data types in order to identify lines that have desirable characteristics and are suitable to be taken forward in plant breeding programmes. A number of key morphological and physiological traits, such as disease resistance and yield, need to be maintained and improved upon if a commercial variety is to be successful. Computational tools that pull these data together and integrate them with pedigree structure will enable breeders to make better decisions about which plant lines to use in crossings, meeting critical demands for both increased yield/production and adaptation to climate change. Results: We have used a large and unique set of experimental barley (H. vulgare) data to develop a prototype pedigree visualization system, and performed a subjective user evaluation with domain experts to guide and direct the development of an interactive pedigree visualization tool which we have called Helium. Conclusions: We show that Helium allows users to easily integrate a number of data types with large plant pedigrees, offering an integrated environment in which they can explore pedigree data. We have also verified that users were happy with the abstract representation of pedigrees used in our visualization tool.
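As a rough illustration of the kind of data such a tool integrates, the sketch below builds a small pedigree graph in Python with networkx and attaches a per-line trait score. The cross records, line names, and the `yield_score` attribute are invented for illustration only and do not reflect Helium's actual input format or API.

```python
import networkx as nx

# Hypothetical cross records: (female parent, male parent, offspring line).
# These names are illustrative only, not real barley pedigree data.
crosses = [
    ("LineA", "LineB", "LineC"),
    ("LineC", "LineD", "LineE"),
]

# Hypothetical per-line trait data (e.g. a yield score) to overlay on the pedigree.
traits = {"LineA": 5.1, "LineB": 4.8, "LineC": 6.0, "LineD": 5.5, "LineE": 6.4}

# Build a directed graph with edges pointing from parent to offspring.
pedigree = nx.DiGraph()
for female, male, offspring in crosses:
    pedigree.add_edge(female, offspring)
    pedigree.add_edge(male, offspring)

# Attach the trait values as node attributes so they can drive visual encodings.
nx.set_node_attributes(pedigree, traits, name="yield_score")

# The ancestry of a line is then a simple graph query.
print(sorted(nx.ancestors(pedigree, "LineE")))  # ['LineA', 'LineB', 'LineC', 'LineD']
```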


Read also

Researchers currently rely on ad hoc datasets to train automated visualization tools and evaluate the effectiveness of visualization designs. These exemplars often lack the characteristics of real-world datasets, and their one-off nature makes it difficult to compare different techniques. In this paper, we present VizNet: a large-scale corpus of over 31 million datasets compiled from open data repositories and online visualization galleries. On average, these datasets comprise 17 records over 3 dimensions; across the corpus, we find 51% of the dimensions record categorical data, 44% quantitative, and only 5% temporal. VizNet provides the necessary common baseline for comparing visualization design techniques and for developing benchmark models and algorithms for automating visual analysis. To demonstrate VizNet's utility as a platform for conducting online crowdsourced experiments at scale, we replicate a prior study assessing the influence of user task and data distribution on visual encoding effectiveness, and extend it by considering an additional task: outlier detection. To contend with running such studies at scale, we demonstrate how a metric of perceptual effectiveness can be learned from experimental results, and show its predictive power across test datasets.
State-of-the-art open network visualization tools like Gephi, KeyLines, and Cytoscape are not suitable for studying street networks with thousands of roads, since they do not simultaneously support polylines for edges, navigable maps, GPU-accelerated rendering, interactivity, and the means for visualizing multivariate data. This paper presents Dash Sylvereye: a new Python library for producing interactive visualizations of primal street networks on top of tiled web maps to fill this gap. Dash Sylvereye can render large street graphs on commodity computers by exploiting WebGL for GPU acceleration. It also provides convenient functions to easily import OpenStreetMap street topologies obtained with the OSMnx library. Thanks to its integration with the Dash framework, Dash Sylvereye can be used to develop web dashboards around temporal and multivariate street data by coordinating the various elements of a Dash Sylvereye visualization with other plotting and UI components provided by Dash. We conduct experiments to assess the performance of Dash Sylvereye on a commodity computer in terms of animation CPU time and frames per second. To further illustrate its features, we also describe a web dashboard application that exploits Dash Sylvereye for the analysis of a SUMO vehicle traffic simulation.
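As a hedged sketch of the OSMnx step mentioned above, the snippet below downloads a drivable street network and converts it to node and edge tables; the place name is arbitrary, and the hand-off to Dash Sylvereye itself is only indicated in a comment because the library's exact API is not reproduced in the abstract.

```python
import osmnx as ox

# Download the drivable street network for an (arbitrary, illustrative) place.
graph = ox.graph_from_place("Monterrey, Mexico", network_type="drive")

# Convert the street graph to GeoDataFrames of nodes and edges,
# keeping edge geometries so streets can later be drawn as polylines.
nodes, edges = ox.graph_to_gdfs(graph)

print(len(nodes), "intersections;", len(edges), "street segments")

# A Dash Sylvereye visualization would then be built from this graph;
# that call is omitted here since its exact signature is not shown above.
```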
Dynamic networks can be challenging to analyze visually, especially if they span a large time range during which new nodes and edges can appear and disappear. Although it is straightforward to provide visualization interfaces that represent multiple states of the network (i.e., multiple timeslices) either simultaneously (e.g., through small multiples) or interactively (e.g., through interactive animation), these interfaces might not support tasks in which disjoint timeslices need to be compared. Since these tasks are key to understanding the dynamic aspects of the network, it is important to understand which interactive visualizations best support them. We present the results of a series of laboratory experiments comparing two traditional approaches (small multiples and interactive animation) with a more recent approach based on interactive timeslicing. The tasks were performed on a large display through a touch interface. Participants completed 24 trials of three tasks with all techniques. The results show that interactive timeslicing brings benefits when comparing distant points in time, but fewer benefits when analyzing contiguous intervals of time.
In many real-world strategic settings, people use information displays to make decisions. In these settings, an information provider chooses which information to provide to strategic agents and how to present it, and agents formulate a best response based on the information and their anticipation of how others will behave. We contribute the results of a controlled online experiment examining how the provision and presentation of information impacts people's decisions in a congestion game. Our experiment compares how different visualization approaches for displaying this information, including bar charts and hypothetical outcome plots, and different information conditions, including whether the visualized information is private or public (i.e., available to all agents), affect decision making and welfare. We characterize the effects of visualization anticipation, referring to changes in behavior when an agent goes from having sole access to a visualization to knowing that others also have access to the visualization to guide their decisions. We also empirically identify the visualization equilibrium, i.e., the visualization for which the visualized outcome of agents' decisions matches the realized decisions of the agents who view it. We reflect on the implications of visualization equilibria and visualization anticipation for designing information displays for real-world strategic settings.
Accurate hand pose estimation at the joint level has several uses in human-robot interaction, user interfacing, and virtual reality applications, yet it is currently not a solved problem. Novel deep learning techniques could make a great improvement on this matter, but they need a huge amount of annotated data. The hand pose datasets released so far present issues that make them impossible to use with deep learning methods, such as a small number of samples, high-level abstraction annotations, or samples consisting only of depth maps. In this work, we introduce a multiview hand pose dataset in which we provide color images of hands and different kinds of annotations for each, i.e., the bounding box and the 2D and 3D locations of the joints in the hand. Besides this, we introduce a simple yet accurate deep learning architecture for real-time, robust 2D hand pose estimation.