
GPU-Based Interactive Visualization of Billion Point Cosmological Simulations

Posted by: Tamas Szalay
Publication date: 2008
Paper language: English

Despite the recent advances in graphics hardware capabilities, a brute force approach is incapable of interactively displaying terabytes of data. We have implemented a system that uses hierarchical level-of-detailing for the results of cosmological simulations, in order to display visually accurate results without loading in the full dataset (containing over 10 billion points). The guiding principle of the program is that the user should not be able to distinguish what they are seeing from a full rendering of the original data. Furthermore, by using a tree-based system for levels of detail, the size of the underlying data is limited only by the capacity of the IO system containing it.
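To make the idea concrete, the following is a minimal sketch of screen-space-error-driven traversal over a level-of-detail tree of point subsets, in the spirit of the system described above. The octree layout, the error metric, and the one-pixel threshold are illustrative assumptions, not the paper's actual implementation.

import math

class OctreeNode:
    def __init__(self, center, size, points, children=()):
        self.center = center      # (x, y, z) midpoint of the cell
        self.size = size          # edge length of the cell
        self.points = points      # representative subsample stored at this node
        self.children = children  # up to 8 finer cells (empty for leaves)

def projected_size_px(node, eye, fov_y, viewport_h):
    # Rough screen-space footprint of the node's cell, in pixels.
    dist = max(1e-6, math.dist(node.center, eye))
    return node.size / dist * viewport_h / (2.0 * math.tan(fov_y / 2.0))

def select_points(node, eye, fov_y, viewport_h, max_px=1.0, out=None):
    # Gather the coarsest subsets whose projected error stays sub-pixel.
    if out is None:
        out = []
    if not node.children or projected_size_px(node, eye, fov_y, viewport_h) <= max_px:
        out.extend(node.points)   # coarse subset is visually indistinguishable
    else:
        for child in node.children:
            select_points(child, eye, fov_y, viewport_h, max_px, out)
    return out

Because the traversal touches only the nodes it actually draws, subsets for deeper levels can stay on disk until a close-up view requests them, which is why the displayable data size is bounded by IO capacity rather than by GPU or host memory.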


Read also

Visualization of large vector line data is a core task in geographic and cartographic systems. Vector maps are often displayed at different cartographic generalization levels, traditionally by using several discrete levels-of-detail (LODs). This limits the generalization levels to a fixed and predefined set of LODs, and generally does not support smooth LOD transitions. However, fast GPUs and novel line rendering techniques can be exploited to integrate dynamic vector map LOD management into GPU-based algorithms for locally-adaptive line simplification and real-time rendering. We propose a new technique that interactively visualizes large line vector datasets at variable LODs. It is based on the Douglas-Peucker line simplification principle, generating an exhaustive set of line segments whose specific subsets represent the lines at any variable LOD. At run time, an appropriate and view-dependent error metric supports screen-space adaptive LOD levels and the display of the correct subset of line segments accordingly. Our implementation shows that we can simplify and display large line datasets interactively. We can successfully apply line style patterns, dynamic LOD selection lenses, and anti-aliasing techniques to our line rendering.
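One common way to realize such a scheme, sketched below only in spirit and not as the authors' implementation, is to precompute a Douglas-Peucker significance per vertex; any run-time tolerance eps derived from the view-dependent error metric then selects a nested vertex subset without re-running the simplification.

def perp_dist(p, a, b):
    # Perpendicular distance of point p from segment ab.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    length = (dx * dx + dy * dy) ** 0.5
    if length == 0.0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dx * (ay - py) - dy * (ax - px)) / length

def dp_significance(pts):
    # significance[i] = deviation at which vertex i first disappears.
    n = len(pts)
    sig = [0.0] * n
    sig[0] = sig[-1] = float("inf")  # endpoints are always kept

    def recurse(i, j, parent_sig):
        if j <= i + 1:
            return
        k, dmax = i + 1, -1.0
        for m in range(i + 1, j):
            d = perp_dist(pts[m], pts[i], pts[j])
            if d > dmax:
                k, dmax = m, d
        # Clamping to the parent's significance keeps coarser subsets
        # nested inside finer ones, so LOD transitions stay consistent.
        sig[k] = min(dmax, parent_sig)
        recurse(i, k, sig[k])
        recurse(k, j, sig[k])

    recurse(0, n - 1, float("inf"))
    return sig

def simplify(pts, sig, eps):
    # Select the subset of vertices representing the line at tolerance eps.
    return [p for p, s in zip(pts, sig) if s >= eps]

Because the subsets are nested, zooming only ever adds or removes vertices rather than replacing the whole line, which is what allows smooth transitions between generalization levels.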
A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results.
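The ODE ingredient can be illustrated briefly: a straight viewing ray x(t) = o + t d in physical space corresponds to a curve u(t) in the spline's parameter domain satisfying du/dt = J_G(u)^{-1} d, where G is the geometry map and J_G its Jacobian. The sketch below integrates that curve with explicit Euler steps for a toy map; the paper's multi-stage OpenGL pipeline, higher-order integration, and pixel-accurate boundary handling are not reproduced here.

import numpy as np

def G(u):
    # Toy trivariate geometry map standing in for a real spline volume.
    x, y, z = u
    return np.array([x + 0.2 * np.sin(np.pi * y), y, z + 0.1 * x * y])

def jacobian(u, h=1e-6):
    # Finite-difference Jacobian of G; analytic for actual spline bases.
    J = np.empty((3, 3))
    for k in range(3):
        e = np.zeros(3)
        e[k] = h
        J[:, k] = (G(u + e) - G(u - e)) / (2.0 * h)
    return J

def trace_ray(u0, d, t_end, n_steps=200):
    # Explicit Euler integration of du/dt = J_G(u)^-1 d.
    u = np.asarray(u0, dtype=float)
    dt = t_end / n_steps
    samples = []
    for _ in range(n_steps):
        du = np.linalg.solve(jacobian(u), d)
        u = u + dt * du
        if np.any(u < 0.0) or np.any(u > 1.0):  # left the parameter domain
            break
        samples.append(u.copy())  # evaluate the simulation field here
    return samples

# e.g. trace_ray(np.array([0.5, 0.5, 0.0]), np.array([0.0, 0.0, 1.0]), 1.0)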
This paper introduces Polyphorm, an interactive visualization and model fitting tool that provides a novel approach for investigating cosmological datasets. Through a fast computational simulation method inspired by the behavior of Physarum polycephalum, a unicellular slime mold organism that efficiently forages for nutrients, astrophysicists are able to extrapolate from sparse datasets, such as galaxy maps archived in the Sloan Digital Sky Survey, and then use these extrapolations to inform analyses of a wide range of other data, such as spectroscopic observations captured by the Hubble Space Telescope. Researchers can interactively update the simulation by adjusting model parameters, and then investigate the resulting visual output to form hypotheses about the data. We describe details of Polyphorm's simulation model and its interaction and visualization modalities, and we evaluate Polyphorm through three scientific use cases that demonstrate the effectiveness of our approach.
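For intuition, a bare-bones 2D version of the Physarum-style agent model reads as follows; Polyphorm itself runs a 3D, GPU-resident variant fitted to galaxy positions, and the parameter values here are generic assumptions rather than its actual settings.

import numpy as np

H = W = 256
N = 20_000
rng = np.random.default_rng(0)
pos = rng.uniform(0, [H, W], size=(N, 2))  # agent positions
ang = rng.uniform(0, 2 * np.pi, size=N)    # agent headings
trail = np.zeros((H, W))                   # shared deposit field

SENSE_DIST, SENSE_ANGLE, TURN, STEP, DECAY = 6.0, 0.4, 0.3, 1.0, 0.95

def sample(p):
    # Trail value under each sampling point (toroidal wrap).
    iy = np.mod(p[:, 0].astype(int), H)
    ix = np.mod(p[:, 1].astype(int), W)
    return trail[iy, ix]

for _ in range(200):
    # Sense left / center / right, steer toward the strongest deposit.
    left, center, right = (
        sample(pos + SENSE_DIST *
               np.stack([np.sin(ang + da), np.cos(ang + da)], axis=1))
        for da in (-SENSE_ANGLE, 0.0, SENSE_ANGLE)
    )
    ang = np.where(center >= np.maximum(left, right), ang,
                   np.where(left > right, ang - TURN, ang + TURN))
    # Move, wrap around the domain, and deposit on the trail field.
    pos = np.mod(pos + STEP * np.stack([np.sin(ang), np.cos(ang)], axis=1),
                 [H, W])
    np.add.at(trail, (pos[:, 0].astype(int), pos[:, 1].astype(int)), 1.0)
    trail *= DECAY  # deposits evaporate, so only reinforced paths persist

The filament network that emerges where agents repeatedly reinforce each other's trails is the kind of structure that is then read as an estimate of the cosmic web.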
We present a novel parallel algorithm for cloth simulation that exploits multiple GPUs for fast computation and the handling of very high resolution meshes. To accelerate implicit integration, we describe new parallel algorithms for sparse matrix-vector multiplication (SpMV) and for dynamic matrix assembly on a multi-GPU workstation. Our algorithms use a novel work queue generation scheme for a fat-tree GPU interconnect topology. Furthermore, we present a novel collision handling scheme that uses spatial hashing for discrete and continuous collision detection along with a non-linear impact zone solver. Our parallel schemes can distribute the computation and storage overhead among multiple GPUs and enable us to perform almost interactive simulation on complex cloth meshes, which can hardly be handled on a single GPU due to memory limitations. We have evaluated the performance with two multi-GPU workstations (with 4 and 8 GPUs, respectively) on cloth meshes with 0.5-1.65M triangles. Our approach can reliably handle the collisions and generate vivid wrinkles and folds at 2-5 fps, which is significantly faster than prior cloth simulation systems. We observe almost linear speedups with respect to the number of GPUs.
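As a reference point for what is being parallelized, here is the serial CSR sparse matrix-vector product that implicit integration evaluates repeatedly; each row is an independent work item, but the paper's work-queue generation and fat-tree distribution across GPUs are not reproduced in this sketch.

import numpy as np

def spmv_csr(indptr, indices, data, x):
    # y = A @ x with A stored in compressed sparse row form.
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        start, end = indptr[row], indptr[row + 1]
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y

# 3x3 example: A = [[4, 0, 1], [0, 2, 0], [1, 0, 3]]
indptr = np.array([0, 2, 3, 5])
indices = np.array([0, 2, 1, 0, 2])
data = np.array([4.0, 1.0, 2.0, 1.0, 3.0])
print(spmv_csr(indptr, indices, data, np.ones(3)))  # -> [5. 2. 4.]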
In this paper, we present a learning-based method for keyframe-based video stylization that allows an artist to propagate the style from a few selected keyframes to the rest of the sequence. Its key advantage is that the resulting stylization is semantically meaningful, i.e., specific parts of moving objects are stylized according to the artist's intention. In contrast to previous style transfer techniques, our approach does not require any lengthy pre-training process nor a large training dataset. We demonstrate how to train an appearance translation network from scratch using only a few stylized exemplars while implicitly preserving temporal consistency. This leads to a video stylization framework that supports real-time inference, parallel processing, and random access to an arbitrary output frame. It can also merge the content from multiple keyframes without the need to perform an explicit blending operation. We demonstrate its practical utility in various interactive scenarios, where the user paints over a selected keyframe and sees her style transferred to an existing recorded sequence or a live video stream.
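A heavily simplified sketch of the kind of training loop this implies, assuming PyTorch and a toy network chosen for illustration: aligned random patches are cropped from one keyframe and its stylized counterpart, a small network is trained from scratch on those pairs, and any frame can then be stylized independently, which is what permits parallel processing and random access to output frames. Architecture and hyperparameters are guesses, not the authors' settings.

import torch
import torch.nn as nn

net = nn.Sequential(  # tiny image-to-image translation network
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def random_patches(src, dst, n=8, size=64):
    # Aligned random crops from the keyframe and its stylized version.
    _, _, h, w = src.shape
    ys = torch.randint(0, h - size, (n,)).tolist()
    xs = torch.randint(0, w - size, (n,)).tolist()
    a = torch.stack([src[0, :, y:y + size, x:x + size] for y, x in zip(ys, xs)])
    b = torch.stack([dst[0, :, y:y + size, x:x + size] for y, x in zip(ys, xs)])
    return a, b

def train(keyframe, stylized, steps=500):
    # keyframe, stylized: float tensors of shape (1, 3, H, W) in [0, 1].
    for _ in range(steps):
        a, b = random_patches(keyframe, stylized)
        loss = nn.functional.l1_loss(net(a), b)
        opt.zero_grad()
        loss.backward()
        opt.step()

def stylize(frame):
    # Frames are independent, so any frame can be processed in any order.
    with torch.no_grad():
        return net(frame).clamp(0.0, 1.0)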