
Clustering-informed Cinematic Astrophysical Data Visualization with Application to the Moon-forming Terrestrial Synestia

Posted by: Patrick Aleo
Publication date: 2020
Research field: Physics
Paper language: English





Scientific visualization tools are currently not optimized to create cinematic, production-quality representations of numerical data for the purpose of science communication. In our pipeline, Estra, we outline a step-by-step process from a raw simulation to a finished render as a way to teach non-experts in the field of visualization how to achieve production-quality outputs on their own. We demonstrate the feasibility of using the visual effects software Houdini for cinematic astrophysical data visualization, informed by machine learning clustering algorithms. To demonstrate the capabilities of this pipeline, we used a post-impact, thermally equilibrated Moon-forming synestia from Lock et al. (2018). Our approach aims to identify physically interpretable clusters, where clusters identified in an appropriate phase space (here, a temperature-entropy phase space) correspond to physically meaningful structures within the simulation data. Clustering results can then be used to highlight these structures by informing the color-mapping process in a simplified Houdini shading network, where dissimilar phase-space clusters are mapped to different color values for easier visual identification. Cluster information can also be used in 3D position space, via Houdini's Scene View, to aid in physical cluster finding, simulation prototyping, and data exploration. Our clustering-based renders are compared to those created by the Advanced Visualization Lab (AVL) team for the fulldome show "Imagine the Moon" as proof of concept. With Estra, scientists have a tool to create their own production-quality, data-driven visualizations.
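As a minimal illustration of the clustering step described above, the sketch below groups particles in a temperature-entropy phase space and assigns each cluster a distinct color that could be exported as a per-point color attribute for Houdini. The abstract does not name the clustering algorithm, so k-means is assumed here; the array names, units, and output file are likewise illustrative, not Estra's actual interface.

```python
# A minimal sketch of phase-space clustering for color mapping. k-means is
# an assumption (the paper only says "machine learning clustering
# algorithms"); array names, units, and the output file are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_colors(T, S, n_clusters=4, seed=0):
    """Cluster particles in (T, S) phase space; return labels and one RGB each."""
    X = StandardScaler().fit_transform(np.column_stack([T, S]))
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(X)
    palette = np.array([[0.9, 0.2, 0.1],   # fixed palette: dissimilar clusters
                        [0.2, 0.5, 0.9],   # get visually distinct colors
                        [0.9, 0.8, 0.2],
                        [0.5, 0.9, 0.4]])
    return labels, palette[labels % len(palette)]

# Example with synthetic particles; in practice T and S come from the snapshot.
rng = np.random.default_rng(0)
T = rng.lognormal(8.0, 0.5, 10_000)   # temperature [K]
S = rng.normal(5e3, 8e2, 10_000)      # specific entropy [J/kg/K]
labels, rgb = cluster_colors(T, S)
np.save("particle_rgb.npy", rgb)      # e.g. to drive Houdini's Cd point attribute
```

In a shading network, a per-point RGB array like this lets each phase-space cluster render in its own color, which is the "dissimilar clusters map to different color values" idea in the abstract.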




Read also

We present a high-performance, graphics processing unit (GPU)-based framework for the efficient analysis and visualization of (nearly) terabyte (TB)-sized 3-dimensional images. Using a cluster of 96 GPUs, we demonstrate for a 0.5 TB image: (1) volume rendering using an arbitrary transfer function at 7-10 frames per second; (2) computation of basic global image statistics such as the mean intensity and standard deviation in 1.7 s; (3) evaluation of the image histogram in 4 s; and (4) evaluation of the global image median intensity in just 45 s. Our measured results correspond to a raw computational throughput approaching one teravoxel per second, and are 10-100 times faster than the best possible performance with traditional single-node, multi-core CPU implementations. A scalability analysis shows the framework will scale well to images sized 1 TB and beyond. Other parallel data analysis algorithms can be added to the framework with relative ease, and accordingly, we present our framework as a possible solution to the image analysis and visualization requirements of next-generation telescopes, including the forthcoming Square Kilometre Array pathfinder radio telescopes.
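The exact global statistics reported above (mean, standard deviation) reduce naturally over distributed chunks, which is what makes a multi-GPU implementation straightforward. The sketch below shows that combining logic in plain NumPy, with a loop over chunks standing in for the 96 GPUs; it illustrates the reduction pattern only, not the paper's framework.

```python
# Illustration of the chunked-reduction pattern behind exact global image
# statistics: each worker reduces its chunk to (count, sum, sum of squares),
# and the partials combine exactly into the global mean and standard
# deviation. A plain loop stands in for the GPUs of the actual framework.
import numpy as np

def partial_stats(chunk):
    """Reduce one chunk to (count, sum, sum of squares) in float64."""
    c = chunk.astype(np.float64)
    return c.size, c.sum(), (c * c).sum()

def combine(partials):
    """Combine per-chunk partials into the exact global mean and std."""
    n = sum(p[0] for p in partials)
    s = sum(p[1] for p in partials)
    ss = sum(p[2] for p in partials)
    mean = s / n
    return mean, np.sqrt(ss / n - mean * mean)

# Example: a small image split into eight "per-GPU" chunks.
image = np.random.default_rng(1).normal(size=1_000_000).astype(np.float32)
mean, std = combine([partial_stats(c) for c in np.array_split(image, 8)])
```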
A moon or natural satellite is a celestial body that orbits a planetary body such as a planet, dwarf planet, or asteroid. Scientists seek to understand the origin and evolution of our solar system by studying the moons of these bodies. Additionally, searches for satellites of planetary bodies can be important to protect the safety of a spacecraft as it approaches or orbits a planetary body. If a satellite of a celestial body is found, the mass of that body can also be calculated once its orbit is determined. Ensuring the Dawn spacecraft's safety on its mission to the asteroid (4) Vesta primarily motivated the work of Dawn's Satellite Working Group (SWG) in the summer of 2011. Dawn mission scientists and engineers utilized various computational tools and techniques for Vesta's satellite search. The objectives of this paper are to 1) introduce the natural satellite search problem, 2) present the computational challenges, approaches, and tools used when addressing this problem, and 3) describe applications of various image processing and computational algorithms for performing satellite searches to the electronic imaging and computer science community. Furthermore, we hope that this communication will enable Dawn mission scientists to improve their satellite search algorithms and tools and be better prepared for performing the same investigation in 2015, when the spacecraft is scheduled to approach and orbit the dwarf planet (1) Ceres.
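The abstract does not detail the SWG's algorithms, but a common image-processing approach to satellite searches is difference imaging: subtract two registered frames so the static background cancels and a moving point source stands out. The sketch below assumes already-registered frames and a simple robust noise threshold; both are illustrative assumptions, not the Dawn team's method.

```python
# Hypothetical difference-imaging sketch: the static starfield cancels
# between two registered frames, so a moving point source survives as a
# strong residual. Registration and the 5-sigma threshold are assumptions.
import numpy as np

def moving_source_candidates(frame1, frame2, k_sigma=5.0):
    """Return pixel coordinates whose frame-to-frame change exceeds k_sigma * noise."""
    diff = frame2.astype(np.float64) - frame1.astype(np.float64)
    sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))  # robust noise estimate
    return np.argwhere(np.abs(diff) > k_sigma * sigma)

# Example: identical noisy frames except for one injected moving source.
rng = np.random.default_rng(2)
f1 = rng.normal(100.0, 3.0, (256, 256))
f2 = f1 + rng.normal(0.0, 0.5, (256, 256))
f2[120, 80] += 50.0                       # the "satellite" entered this pixel
print(moving_source_candidates(f1, f2))   # flags (120, 80)
```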
D. del Ser, O. Fors, J. Núñez (2014)
Certain instrumental effects and data reduction anomalies introduce systematic errors in photometric time series. Detrending algorithms such as the Trend Filtering Algorithm (TFA) (Kovács et al. 2004) have played a key role in minimizing the effects caused by these systematics. Here we present the results obtained after applying the TFA and Savitzky-Golay (Savitzky & Golay 1964) detrending algorithms and the Box Least Squares (BLS) phase-folding algorithm (Kovács et al. 2002) to the TFRM-PSES data (Fors et al. 2013). Tests performed on these data show that by applying the two filtering methods together, the photometric RMS is on average improved by a factor of 3-4, with better efficiency towards brighter magnitudes, while applying TFA alone yields an improvement of a factor of 1-2. As a result of this improvement, we are able to detect and analyze a large number of stars per TFRM-PSES field which present some kind of variability. Also, after porting these algorithms to Python and parallelizing them, we have improved, even for large data samples, the computing performance of the overall detrending+BLS algorithm by a factor of ~10 with respect to Kovács et al. (2004).
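For readers who want to try the second and third stages of this pipeline with standard Python libraries, the sketch below applies scipy's Savitzky-Golay filter for detrending and astropy's BoxLeastSquares for the phase-folding search. TFA is omitted (it requires an ensemble of comparison-star light curves), and the window length, period grid, and transit duration are illustrative choices rather than TFRM-PSES settings.

```python
# Sketch of stages two and three with standard libraries: Savitzky-Golay
# detrending from scipy and Box Least Squares from astropy. TFA is omitted
# (it needs an ensemble of comparison stars); window length, period grid,
# and transit duration are illustrative, not TFRM-PSES settings.
import numpy as np
from scipy.signal import savgol_filter
from astropy.timeseries import BoxLeastSquares

def detrend_and_search(t, flux):
    """Savitzky-Golay detrend, then run a BLS period search on the residuals."""
    trend = savgol_filter(flux, window_length=101, polyorder=2)
    resid = flux - trend + np.median(flux)
    periods = np.linspace(0.5, 10.0, 5000)                  # trial periods [d]
    power = BoxLeastSquares(t, resid).power(periods, 0.1)   # 0.1 d transit duration
    return periods[np.argmax(power.power)], resid

# Example: a slow instrumental trend plus a boxy 3-day transit signal.
t = np.linspace(0.0, 20.0, 2000)                  # days
flux = 1.0 + 0.01 * np.sin(0.3 * t)               # trend
flux = flux - 0.02 * ((t % 3.0) < 0.1)            # transits
best_period, resid = detrend_and_search(t, flux)  # best_period ~ 3 d
```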
We have re-analyzed the stability of pulse arrival times from pulsars and white dwarfs using several analysis tools for measuring the noise characteristics of sampled time and frequency data. We show that the best terrestrial artificial clocks substantially exceed the performance of astronomical sources as time-keepers in terms of accuracy (as defined by cesium primary frequency standards) and stability. This superiority in stability can be directly demonstrated over time periods up to two years, where there is high quality data for both. Beyond 2 years there is a deficiency of data for clock/clock comparisons and both terrestrial and astronomical clocks show equal performance being equally limited by the quality of the reference timescales used to make the comparisons. Nonetheless, we show that detailed accuracy evaluations of modern terrestrial clocks imply that these new clocks are likely to have a stability better than any astronomical source up to comparison times of at least hundreds of years. This article is intended to provide a correct appreciation of the relative merits of natural and artificial clocks. The use of natural clocks as tests of physics under the most extreme conditions is entirely appropriate; however, the contention that these natural clocks, particularly white dwarfs, can compete as timekeepers against devices constructed by mankind is shown to be doubtful.
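The abstract does not name its analysis tools, but the standard stability measure for clock comparisons of this kind is the Allan deviation. Below is a minimal non-overlapping implementation, assuming evenly sampled fractional-frequency data; for white frequency noise it falls as tau^(-1/2), which the example verifies.

```python
# Minimal non-overlapping Allan deviation, assuming evenly sampled
# fractional-frequency data y. The Allan deviation is the standard clock
# stability measure, but the paper's exact tool set is not specified here.
import numpy as np

def allan_deviation(y, m):
    """Allan deviation at averaging factor m (tau = m * sample interval)."""
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)  # averages over tau
    dy = np.diff(ybar)
    return np.sqrt(0.5 * np.mean(dy * dy))

# White frequency noise: the Allan deviation should fall as tau**-0.5.
y = np.random.default_rng(3).normal(0.0, 1e-13, 100_000)
for m in (1, 10, 100, 1000):
    print(m, allan_deviation(y, m))
```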
The scientific community is presently witnessing an unprecedented growth in the quality and quantity of data sets coming from simulations and real-world experiments. To access effectively and extract the scientific content of such large-scale data sets (often sizes are measured in hundreds or even millions of Gigabytes) appropriate tools are needed. Visual data exploration and discovery is a robust approach for rapidly and intuitively inspecting large-scale data sets, e.g. for identifying new features and patterns or isolating small regions of interest within which to apply time-consuming algorithms. This paper presents a high performance parallelized implementation of Splotch, our previously developed visual data exploration and discovery algorithm for large-scale astrophysical data sets coming from particle-based simulations. Splotch has been improved in order to exploit modern massively parallel architectures, e.g. multicore CPUs and CUDA-enabled GPUs. We present performance and scalability benchmarks on a number of test cases, demonstrating the ability of our high performance parallelized Splotch to handle efficiently large-scale data sets, such as the outputs of the Millennium II simulation, the largest cosmological simulation ever performed.
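At its core, a Splotch-style renderer splats each particle onto the image plane and accumulates a per-pixel intensity; the parallelization work described above distributes exactly this accumulation across cores and GPUs. The sketch below is a bare-bones serial version with an orthographic projection and log tone mapping; per-particle radii, color transfer functions, and the MPI/CUDA layers of the real code are omitted.

```python
# Bare-bones serial version of the particle "splatting" at Splotch's core:
# orthographically project particles along z and accumulate a per-pixel
# intensity, then log tone-map. Radii, transfer functions, and the MPI/CUDA
# layers of the real renderer are omitted; all names here are illustrative.
import numpy as np

def splat(pos, intensity, res=512):
    """Accumulate per-particle intensity into a res x res image (x-y plane)."""
    img, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=res, weights=intensity)
    return np.log1p(img)  # simple log tone mapping for display

# Example: a million Gaussian-distributed particles with unit intensity.
pos = np.random.default_rng(4).normal(0.0, 1.0, (1_000_000, 3))
img = splat(pos, intensity=np.ones(len(pos)))
```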