
New Thinking on, and with, Data Visualization

Added by Thomas Robitaille
Publication date: 2018
Field: Physics
Language: English


As the complexity and volume of datasets have increased along with the capabilities of modular, open-source, easy-to-implement visualization tools, scientists' need for, and appreciation of, data visualization has risen too. Until recently, scientists thought of the explanatory graphics created at a research project's conclusion as pretty pictures needed only for journal publication or public outreach. The plots and displays produced during a research project, often intended only for experts, were thought of as a separate category, what we here call exploratory visualization. In this view, discovery comes from exploratory visualization, and explanatory visualization is just for communication. Our aim in this paper is to spark conversation amongst scientists, computer scientists, outreach professionals, educators, and graphics and perception experts about how to foster flexible data visualization practices that can facilitate discovery and communication at the same time. We present an example of a new finding made using the glue visualization environment to demonstrate how the border between explanatory and exploratory visualization is easily traversed. The linked-view principles, as well as the actual code in glue, are easily adapted to astronomy, medicine, and geographical information science, all fields where combining, visualizing, and analyzing several high-dimensional datasets yields insight. Whether or not scientists can use such a flexible, undisciplined environment to its fullest potential without special training remains to be seen. We conclude with suggestions for improving the training of scientists in visualization practices, and of computer scientists in the iterative, non-workflow-like ways in which modern science is carried out.
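The linked-view principle the abstract refers to is that a selection made in one view of a dataset is immediately reflected in every other view of the same data. The following is a minimal stdlib sketch of that idea, not glue's actual API; the class and attribute names here are invented for illustration.

```python
# Minimal illustration of the linked-view principle: a selection defined
# in one view is broadcast to every other view sharing the same data.
# This is a conceptual sketch, not glue's real API.

class LinkedData:
    """Shared dataset; views register themselves to hear about selections."""
    def __init__(self, records):
        self.records = records
        self._views = []

    def register(self, view):
        self._views.append(view)

    def select(self, predicate):
        # A selection made anywhere is pushed to every registered view.
        subset = [r for r in self.records if predicate(r)]
        for view in self._views:
            view.on_selection(subset)

class View:
    """A 'view' that tracks the current selection for one attribute."""
    def __init__(self, data, attribute):
        self.attribute = attribute
        self.selected = []
        data.register(self)

    def on_selection(self, subset):
        self.selected = [r[self.attribute] for r in subset]

# Toy catalog: position x and brightness per source.
data = LinkedData([{"x": 1, "flux": 10}, {"x": 2, "flux": 50}, {"x": 3, "flux": 5}])
xy_view = View(data, "x")
flux_view = View(data, "flux")

# Lassoing bright sources in the flux view updates the position view too.
data.select(lambda r: r["flux"] >= 10)
print(xy_view.selected)    # [1, 2]
print(flux_view.selected)  # [10, 50]
```

The point of the design is that the selection lives with the data, not with any single plot, which is what lets an exploratory lasso become an explanatory figure without rework.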



Related Research

We present a high-performance, graphics processing unit (GPU)-based framework for the efficient analysis and visualization of (nearly) terabyte (TB)-sized 3-dimensional images. Using a cluster of 96 GPUs, we demonstrate for a 0.5 TB image: (1) volume rendering using an arbitrary transfer function at 7--10 frames per second; (2) computation of basic global image statistics such as the mean intensity and standard deviation in 1.7 s; (3) evaluation of the image histogram in 4 s; and (4) evaluation of the global image median intensity in just 45 s. Our measured results correspond to a raw computational throughput approaching one teravoxel per second, and are 10--100 times faster than the best possible performance with traditional single-node, multi-core CPU implementations. A scalability analysis shows the framework will scale well to images sized 1 TB and beyond. Other parallel data analysis algorithms can be added to the framework with relative ease, and accordingly, we present our framework as a possible solution to the image analysis and visualization requirements of next-generation telescopes, including the forthcoming Square Kilometre Array pathfinder radio telescopes.
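The global mean and standard deviation quoted above lend themselves to the standard parallel reduction: each node reports partial sums over its sub-volume, and the partials are combined into global statistics. A stdlib sketch of that reduction, assuming only that the image is split into per-node chunks (the paper's framework does this on GPUs; the structure here is illustrative, not its actual code):

```python
# Parallel mean/std via per-chunk partial sums: each "chunk" stands in
# for one node's sub-volume; combine() reduces the partials globally.
import math

def partial_sums(chunk):
    # Each node reports (count, sum, sum of squares) for its voxels.
    return (len(chunk), sum(chunk), sum(v * v for v in chunk))

def combine(parts):
    # Reduce the per-node triples into a global mean and std deviation.
    n = sum(p[0] for p in parts)
    s = sum(p[1] for p in parts)
    ss = sum(p[2] for p in parts)
    mean = s / n
    var = ss / n - mean * mean   # E[x^2] - E[x]^2
    return mean, math.sqrt(var)

chunks = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]   # image split across nodes
mean, std = combine([partial_sums(c) for c in chunks])
print(mean)  # 3.5
```

Because the combine step only touches three numbers per node, the reduction cost is negligible next to the per-voxel pass, which is why such statistics scale almost linearly with node count.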
Cosmography, the study and making of maps of the universe or cosmos, is a field where visual representation benefits from modern three-dimensional visualization techniques and media. At extragalactic distance scales, visualization contributes to understanding the complex structure of the local universe, in terms of the spatial distribution and flows of galaxies and dark matter. In this paper, we report advances in the field of extragalactic cosmography obtained using the SDvision visualization software in the context of the Cosmicflows Project. Here, multiple visualization techniques are applied to a variety of data products: catalogs of galaxy positions and galaxy peculiar velocities, the reconstructed velocity field, density field, gravitational potential field, the velocity shear tensor viewed in terms of its eigenvalues and eigenvectors, and envelope surfaces enclosing basins of attraction. These visualizations, implemented as high-resolution images, videos, and interactive viewers, have contributed to a number of studies: the cosmography of the local part of the universe, the nature of the Great Attractor, the discovery of the boundaries of our home supercluster of galaxies, Laniakea, the mapping of the cosmic web, and the study of attractors and repellers.
Most modern astrophysical datasets are multi-dimensional; a characteristic that can nowadays generally be conserved and exploited scientifically during the data reduction/simulation and analysis cascades. Yet, the same multi-dimensional datasets are systematically cropped, sliced and/or projected to printable two-dimensional (2-D) diagrams at the publication stage. In this article, we introduce the concept of the X3D pathway as a means of simplifying and easing access to data visualization and publication via three-dimensional (3-D) diagrams. The X3D pathway exploits the facts that 1) the X3D 3-D file format lies at the center of a product tree that includes interactive HTML documents, 3-D printing, and high-end animations, and 2) all high-impact-factor & peer-reviewed journals in Astrophysics are now published (some exclusively) online. We argue that the X3D standard is an ideal vector for sharing multi-dimensional datasets, as it provides direct access to a range of different data visualization techniques, is fully open source, and is a well-defined ISO standard. Unlike earlier proposals to publish multi-dimensional datasets via 3-D diagrams, the X3D pathway is not tied to specific software (prone to rapid and unexpected evolution), but is instead compatible with a range of open-source software already in use by our community. The interactive HTML branch of the X3D pathway is also actively supported by leading peer-reviewed journals in the field of Astrophysics. Finally, this article provides interested readers with a detailed set of practical astrophysical examples designed to act as a stepping stone towards the implementation of the X3D pathway for any other dataset.
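Because X3D is an XML-based ISO standard, a dataset can be exported to it with nothing but a standard library. Below is a minimal sketch that serializes a toy 3-D point catalog as an X3D scene; the element names (X3D, Scene, Shape, PointSet, Coordinate) follow the X3D standard, while the catalog values and the function name are invented for illustration.

```python
# Emit a toy 3-D point catalog as a minimal X3D scene using only stdlib.
import xml.etree.ElementTree as ET

def catalog_to_x3d(points):
    # X3D scene graph: X3D -> Scene -> Shape -> PointSet -> Coordinate.
    x3d = ET.Element("X3D", version="3.3", profile="Interchange")
    scene = ET.SubElement(x3d, "Scene")
    shape = ET.SubElement(scene, "Shape")
    pointset = ET.SubElement(shape, "PointSet")
    # X3D flattens coordinates into one whitespace-separated list.
    coords = " ".join(f"{x} {y} {z}" for x, y, z in points)
    ET.SubElement(pointset, "Coordinate", point=coords)
    return ET.tostring(x3d, encoding="unicode")

# Three toy "galaxies" at (x, y, z) positions.
doc = catalog_to_x3d([(0.0, 0.0, 0.0), (1.5, 2.0, -0.5), (3.0, 1.0, 4.0)])
print(doc)
```

The resulting file is what feeds the product tree the abstract describes: the same scene can be embedded in an interactive HTML page or handed to a 3-D printing toolchain.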
We report on an exploratory project aimed at performing immersive 3D visualization of astronomical data, starting with spectral-line radio data cubes from galaxies. This work is done as a collaboration between the Department of Physics and Astronomy and the Department of Computer Science at the University of Manitoba. We are building our prototype using the 3D engine Unity, because of its ease of use for integration with advanced displays such as a CAVE environment, a zSpace tabletop, or virtual reality headsets. We address general issues regarding 3D visualization, such as loading and converting astronomy data, performing volume rendering on the GPU, and producing physically meaningful visualizations using principles of visual literacy. We discuss some challenges to be met when designing a user interface that allows us to take advantage of this new way of exploring data. We hope to lay the foundations for an innovative framework useful for all astronomers who use spectral-line data cubes, and encourage interested parties to join our efforts. This pilot project addresses the challenges presented by frontier astronomy experiments, such as the Square Kilometre Array and its precursors.
With the next-generation Timepix3 hybrid pixel detector, new possibilities and challenges have arisen. The Timepix3 segments an active sensor area of 1.98 cm$^2$ into a square matrix of 256 x 256 pixels. In each pixel, the Time of Arrival (ToA, with a time binning of 1.56 ns) and Time over Threshold (ToT, energy) are measured simultaneously in a data-driven, i.e., self-triggered, read-out scheme. This contribution presents a framework for data acquisition, real-time clustering, visualization, classification and data saving. All of these tasks can be performed online, directly from multiple readouts via the UDP protocol. Clusters are reconstructed on a pixel-by-pixel decision from the stream of not necessarily chronologically sorted pixel data. To achieve quick spatial pixel-to-cluster matching, non-trivial data structures (quadtrees) are utilized. Furthermore, parallelism (i.e., a multi-threaded architecture) is used to further improve the performance of the framework. Such real-time clustering offers the advantages of online filtering and classification of events. Versatility of the software is ensured by supporting all major operating systems (macOS, Windows and Linux) with both graphical and command-line interfaces. The performance of the real-time clustering and of the applied filtering methods is demonstrated using data from the Timepix3 network installed in the ATLAS and MoEDAL experiments at CERN.
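The pixel-by-pixel clustering decision described above can be sketched as follows: each arriving hit joins an open cluster if it is spatially adjacent, and close in ToA, to a pixel already assigned, and otherwise opens a new cluster. The real framework uses a quadtree for the spatial lookup and handles cluster merging; in this simplified stdlib sketch a plain dict keyed by pixel coordinate stands in for the quadtree, and merging is ignored.

```python
# Simplified pixel-to-cluster matching for a stream of (x, y, ToA) hits.
# A dict keyed by coordinate replaces the quadtree used by the framework.

def cluster_hits(hits, toa_window=100):
    """hits: iterable of (x, y, toa); returns a list of clusters."""
    clusters = []        # each cluster: list of (x, y, toa) hits
    index = {}           # (x, y) -> cluster id, for O(1) neighbor lookup
    for x, y, toa in hits:
        # Look for an assigned pixel in the 8-neighborhood, close in time.
        target = None
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                cid = index.get((x + dx, y + dy))
                if cid is not None and abs(clusters[cid][-1][2] - toa) < toa_window:
                    target = cid
        if target is None:
            target = len(clusters)   # no neighbor: open a new cluster
            clusters.append([])
        clusters[target].append((x, y, toa))
        index[(x, y)] = target
    return clusters

# Two hits touching in space and time form one cluster; a distant hit does not.
out = cluster_hits([(10, 10, 0), (10, 11, 5), (200, 50, 7)])
print(len(out))  # 2
```

The time window is what lets clustering run on a stream that is not chronologically sorted: a hit only attaches to a neighbor whose ToA is within the window, so stale clusters are effectively closed.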
