
Interchanging Interactive 3-d Graphics for Astronomy

Posted by Christopher Fluke
Publication date: 2008
Paper language: English





We demonstrate how interactive, three-dimensional (3-d) scientific visualizations can be efficiently interchanged between a variety of media. Through the use of an appropriate interchange format, and a unified interaction interface, we minimize the effort to produce visualizations appropriate for undertaking knowledge discovery at the astronomer's desktop, as part of conference presentations, in digital publications or as Web content. We use examples from cosmological visualization to address some of the issues of interchange, and to describe our approach to adapting S2PLOT desktop visualizations to the Web. Supporting demonstrations are available at http://astronomy.swin.edu.au/s2plot/interchange/
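As a rough illustration of the write-once, interchange-everywhere idea in this abstract, the sketch below describes a particle scene once and serializes it to a neutral file that a desktop viewer, a presentation embed, or a Web page could all load. The scene structure, the JSON format, and the function names are illustrative assumptions, not the actual S2PLOT interchange format or API.

```python
# Hypothetical sketch of the interchange idea: describe the scene geometry once,
# then serialize it to a neutral format that desktop, presentation, and Web
# viewers can all load. Names and format are illustrative, not S2PLOT's.
import json
import random

def make_particle_scene(n=1000, seed=42):
    """Build a toy 'cosmological' particle set inside a unit cube."""
    rng = random.Random(seed)
    return {
        "bounds": [0.0, 1.0, 0.0, 1.0, 0.0, 1.0],   # x1, x2, y1, y2, z1, z2
        "particles": [[rng.random(), rng.random(), rng.random()] for _ in range(n)],
        "camera": {"position": [2.0, 2.0, 2.0], "look_at": [0.5, 0.5, 0.5]},
    }

def export_interchange(scene, path):
    """Write the scene once; any conforming viewer (desktop or Web) reads it."""
    with open(path, "w") as f:
        json.dump(scene, f)

if __name__ == "__main__":
    scene = make_particle_scene()
    export_interchange(scene, "scene.json")   # the same file drives desktop and Web viewers
```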




Read also

Astronomy is entering a new era of discovery, coincident with the establishment of new facilities for observation and simulation that will routinely generate petabytes of data. While an increasing reliance on automated data analysis is anticipated, a critical role will remain for visualization-based knowledge discovery. We have investigated scientific visualization applications in astronomy through an examination of the literature published during the last two decades. We identify the two most active fields for progress - visualization of large-N particle data and spectral data cubes - discuss open areas of research, and introduce a mapping between astronomical sources of data and data representations used in general purpose visualization tools. We discuss contributions using high performance computing architectures (e.g. distributed processing and GPUs), collaborative astronomy visualization, the use of workflow systems to store metadata about visualization parameters, and the use of advanced interaction devices. We examine a number of issues that may be limiting the spread of scientific visualization research in astronomy and identify six grand challenges for scientific visualization research in the Petascale Astronomy Era.
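The mapping this abstract mentions, from astronomical data sources to the data representations used in general-purpose visualization tools, can be pictured with a small illustrative table. The pairings below are common conventions, not the survey's actual mapping.

```python
# Illustrative (not exhaustive) sketch of the kind of mapping the survey
# introduces: astronomical data sources on the left, general-purpose
# visualization representations on the right.
DATA_TO_REPRESENTATION = {
    "spectral data cube (e.g. HI survey)": "structured 3D grid / volume rendering",
    "N-body or SPH particle snapshot":     "point, glyph, or splat rendering",
    "object catalogue":                    "3D scatter plot / point cloud",
    "sky image or mosaic":                 "2D image / texture-mapped surface",
}

for source, representation in DATA_TO_REPRESENTATION.items():
    print(f"{source:38s} -> {representation}")
```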
We propose a concise approximate description, and a method for efficiently obtaining this description, via adaptive random sampling of the performance (running time, memory consumption, or any other profileable numerical quantity) of a given algorithm on some low-dimensional rectangular grid of inputs. The formal correctness is proven under reasonable assumptions on the algorithm under consideration, and the approach's practical benefit is demonstrated by predicting for which observer positions and viewing directions an occlusion culling algorithm yields a net performance benefit or loss compared to a simple brute force renderer.
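A minimal sketch of the idea, assuming a profileable cost function over a 2D rectangular grid of inputs: sample each grid cell, then refine sampling in the cell whose measurements disagree most. The refinement rule and the toy cost function are assumptions made for illustration; the paper's actual algorithm and its correctness conditions are not reproduced here.

```python
# Hedged sketch of adaptive random sampling of a performance quantity over a
# 2D rectangular grid of inputs. The subdivision heuristic is illustrative only.
import random

def profile(x, y):
    """Stand-in for a profileable quantity, e.g. runtime(culling) - runtime(brute force)."""
    return (x - 0.3) ** 2 + (y - 0.7) ** 2 - 0.2   # negative => culling wins (toy model)

def adaptive_sample(budget=256):
    """Sample the grid, repeatedly subdividing the cell whose corner values disagree most."""
    cells = [(0.0, 1.0, 0.0, 1.0)]                 # (x0, x1, y0, y1) covering the input grid
    samples = []
    while len(samples) < budget:
        # Measure a random point inside every current cell and score each cell
        # by how much its corner measurements disagree.
        spreads = []
        for (x0, x1, y0, y1) in cells:
            x, y = random.uniform(x0, x1), random.uniform(y0, y1)
            samples.append(((x, y), profile(x, y)))
            corner_vals = [profile(cx, cy) for cx in (x0, x1) for cy in (y0, y1)]
            spreads.append((max(corner_vals) - min(corner_vals), (x0, x1, y0, y1)))
        # Refine only the most uncertain cell (largest spread) into four children.
        spreads.sort(key=lambda s: s[0], reverse=True)
        x0, x1, y0, y1 = spreads[0][1]
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        cells = [c for _, c in spreads[1:]] + [
            (x0, xm, y0, ym), (xm, x1, y0, ym), (x0, xm, ym, y1), (xm, x1, ym, y1)]
    return samples

if __name__ == "__main__":
    pts = adaptive_sample()
    wins = sum(1 for _, v in pts if v < 0)
    print(f"{wins}/{len(pts)} sampled inputs favour the occlusion-culling path")
```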
We present RealitySketch, an augmented reality interface for sketching interactive graphics and visualizations. In recent years, an increasing number of AR sketching tools enable users to draw and embed sketches in the real world. However, with the current tools, sketched contents are inherently static, floating in mid-air without responding to the real world. This paper introduces a new way to embed dynamic and responsive graphics in the real world. In RealitySketch, the user draws graphical elements on a mobile AR screen and binds them with physical objects in real-time and improvisational ways, so that the sketched elements dynamically move with the corresponding physical motion. The user can also quickly visualize and analyze real-world phenomena through responsive graph plots or interactive visualizations. This paper contributes a set of interaction techniques that enable capturing, parameterizing, and visualizing real-world motion without pre-defined programs and configurations. Finally, we demonstrate our tool with several application scenarios, including physics education, sports training, and in-situ tangible interfaces.
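To make the "bind a sketched element to physical motion" idea concrete, here is a small, hypothetical sketch: a stand-in tracker reports a physical object's position, and that motion is parameterized as an angle that a sketched plot could display live. The tracker class and the pendulum example are assumptions for illustration; the actual system runs on a mobile AR device, not a toy loop like this.

```python
# Hypothetical sketch of binding a sketched parameter to tracked physical motion.
# The tracker below is a stand-in for an AR tracking interface.
import math

class TrackedObject:
    """Stand-in for an AR tracker reporting a physical object's 2D position."""
    def __init__(self):
        self.t = 0.0
    def position(self):
        self.t += 1 / 30                               # pretend 30 Hz tracking
        return (math.sin(self.t), -math.cos(self.t))   # toy pendulum bob

def angle_parameter(pivot, bob):
    """Parameterize the motion: angle of the bob about a sketched pivot point."""
    return math.degrees(math.atan2(bob[0] - pivot[0], -(bob[1] - pivot[1])))

if __name__ == "__main__":
    pivot, bob, history = (0.0, 0.0), TrackedObject(), []
    for _ in range(90):                                # ~3 s of "captured" motion
        history.append(angle_parameter(pivot, bob.position()))
    print(f"captured {len(history)} samples, angle range "
          f"[{min(history):.1f}, {max(history):.1f}] degrees")
```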
We introduce ThreeDWorld (TDW), a platform for interactive multi-modal physical simulation. With TDW, users can simulate high-fidelity sensory data and physical interactions between mobile agents and objects in a wide variety of rich 3D environments. TDW has several unique properties: 1) real-time near photo-realistic image rendering quality; 2) a library of objects and environments with materials for high-quality rendering, and routines enabling user customization of the asset library; 3) generative procedures for efficiently building classes of new environments; 4) high-fidelity audio rendering; 5) believable and realistic physical interactions for a wide variety of material types, including cloths, liquid, and deformable objects; 6) a range of avatar types that serve as embodiments of AI agents, with the option for user avatar customization; and 7) support for human interactions with VR devices. TDW also provides a rich API enabling multiple agents to interact within a simulation and return a range of sensor and physics data representing the state of the world. We present initial experiments enabled by the platform around emerging research directions in computer vision, machine learning, and cognitive science, including multi-modal physical scene understanding, multi-agent interactions, models that learn like a child, and attention studies in humans and neural networks. The simulation platform will be made publicly available.
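The command-and-sensor-data pattern described in this abstract (agents receive commands each step and the simulation returns observations) can be sketched as follows. The client class, command names, and observation fields are hypothetical placeholders, not TDW's published API.

```python
# Illustrative sketch of a controller sending a batch of commands per step and
# receiving per-agent sensor/physics observations back. All names are placeholders.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Observation:
    agent_id: int
    rgb_frame: bytes = b""                       # rendered camera image (placeholder)
    collisions: list = field(default_factory=list)

class SimClient:
    """Stand-in for a client talking to a simulation server over a socket."""
    def communicate(self, commands: list[dict[str, Any]]) -> list[Observation]:
        # A real client would serialize the commands, advance the simulation one
        # step, and return the sensor data each agent requested.
        return [Observation(agent_id=c["agent"])
                for c in commands if c["type"] == "request_frame"]

if __name__ == "__main__":
    sim = SimClient()
    for step in range(3):
        obs = sim.communicate([
            {"type": "apply_force", "agent": 0, "force": [0, 0, 1]},
            {"type": "request_frame", "agent": 0},
            {"type": "request_frame", "agent": 1},
        ])
        print(f"step {step}: received {len(obs)} observations")
```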
Virtual and augmented reality (VR/AR) displays strive to provide a resolution, framerate and field of view that matches the perceptual capabilities of the human visual system, all while constrained by limited compute budgets and transmission bandwidths of wearable computing systems. Foveated graphics techniques have emerged that could achieve these goals by exploiting the falloff of spatial acuity in the periphery of the visual field. However, considerably less attention has been given to temporal aspects of human vision, which also vary across the retina. This is in part due to limitations of current eccentricity-dependent models of the visual system. We introduce a new model, experimentally measuring and computationally fitting eccentricity-dependent critical flicker fusion thresholds jointly for both space and time. In this way, our model is unique in enabling the prediction of temporal information that is imperceptible for a certain spatial frequency, eccentricity, and range of luminance levels. We validate our model with an image quality user study, and use it to predict potential bandwidth savings 7x higher than those afforded by current spatial-only foveated models. As such, this work forms the enabling foundation for new temporally foveated graphics techniques.
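A hedged sketch of how an eccentricity-dependent critical flicker fusion (CFF) threshold could translate into bandwidth savings: temporal frequency bands above the threshold at a given eccentricity are imperceptible there and need not be transmitted. The threshold function and its numbers below are placeholders, not the model fitted in the paper.

```python
# Hedged sketch of perceptual gating of temporal content by eccentricity.
# The CFF threshold function is a placeholder with illustrative numbers only.
def cff_threshold_hz(eccentricity_deg: float) -> float:
    """Placeholder eccentricity-dependent CFF threshold (not the paper's fitted model)."""
    return max(20.0, 60.0 - 0.4 * eccentricity_deg)

def keep_temporal_bands(eccentricity_deg: float, signal_bands_hz: list[float]) -> list[float]:
    """Keep only temporal frequency bands the viewer could actually perceive there."""
    limit = cff_threshold_hz(eccentricity_deg)
    return [f for f in signal_bands_hz if f <= limit]

if __name__ == "__main__":
    bands = [7.5, 15.0, 30.0, 60.0, 120.0]   # temporal frequency bands in the content
    for ecc in (0, 20, 60):
        kept = keep_temporal_bands(ecc, bands)
        print(f"eccentricity {ecc:2d} deg: transmit {len(kept)}/{len(bands)} bands -> {kept}")
```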