
E0102-VR: exploring the scientific potential of Virtual Reality for observational astrophysics

Published by: Frédéric P.A. Vogt
Publication date: 2019
Research field: Physics
Paper language: English


Virtual Reality (VR) technology has been subject to a rapid democratization in recent years, driven in large part by the entertainment industry, and epitomized by the emergence of consumer-grade, plug-and-play, room-scale VR devices. To explore the scientific potential of this technology for the field of observational astrophysics, we have created an experimental VR application: E0102-VR. The specific scientific goal of this application is to facilitate the characterization of the 3D structure of the oxygen-rich ejecta in the young supernova remnant 1E 0102.2-7219 in the Small Magellanic Cloud. Using E0102-VR, we measure the physical size of two large cavities in the system, including a (7.0$\pm$0.5) pc-long funnel structure on the far-side of the remnant. The E0102-VR application, albeit experimental, demonstrates the benefits of using human depth perception for a rapid and accurate characterization of complex 3D structures. Given the implementation costs (time-wise) of a dedicated VR application like E0102-VR, we conclude that the future of VR for scientific purposes in astrophysics most likely resides in the development of a robust, generic application dedicated to the exploration and visualization of 3D observational datasets, akin to a ``ds9-VR''.
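As a rough illustration of the kind of size measurement quoted above, the sketch below converts an angular extent on the sky and a line-of-sight velocity extent into physical lengths. The SMC distance (~62 kpc), the remnant age (~2000 yr), the homologous-expansion assumption, and the example input values are all illustrative assumptions, not numbers taken from the paper.

```python
import numpy as np

# Illustrative values only (not taken from the paper).
D_SMC_PC = 62e3            # assumed distance to the Small Magellanic Cloud [pc]
AGE_YR = 2000.0            # assumed age of the remnant [yr]
KM_PER_PC = 3.086e13       # kilometres in one parsec
S_PER_YR = 3.156e7         # seconds in one year

def plane_of_sky_size(angular_size_arcsec, distance_pc=D_SMC_PC):
    """Convert an angular extent on the sky into a physical length [pc]."""
    theta_rad = np.deg2rad(angular_size_arcsec / 3600.0)
    return distance_pc * theta_rad

def line_of_sight_depth(delta_v_kms, age_yr=AGE_YR):
    """Map a line-of-sight velocity extent to a depth [pc],
    assuming homologous (free) expansion, i.e. z = v * t."""
    return delta_v_kms * age_yr * S_PER_YR / KM_PER_PC

# Hypothetical inputs: a feature ~23 arcsec across on the sky,
# spanning ~3000 km/s along the line of sight.
print(plane_of_sky_size(23.0))      # ~6.9 pc in the plane of the sky
print(line_of_sight_depth(3000.0))  # ~6.1 pc along the line of sight
```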




Read also

110 - C.J. Fluke 2018
Spherical coordinate systems, which are ubiquitous in astronomy, cannot be shown without distortion on flat, two-dimensional surfaces. This poses challenges for the two complementary phases of visual exploration -- making discoveries in data by looking for relationships, patterns or anomalies -- and publication -- where the results of an exploration are made available for scientific scrutiny or communication. This is a long-standing problem, and many practical solutions have been developed. Our allskyVR approach provides a workflow for experimentation with commodity virtual reality head-mounted displays. Using the free, open source S2PLOT programming library, and the A-Frame WebVR browser-based framework, we provide a straightforward way to visualise all-sky catalogues on a user-centred, virtual celestial sphere. The allskyVR distribution contains both a quickstart option, complete with a gaze-based menu system, and a fully customisable mode for those who need more control of the immersive experience. The software is available for download from: https://github.com/cfluke/allskyVR
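The allskyVR distribution handles this step itself via S2PLOT and A-Frame; purely as an illustration of the coordinate transform such a workflow needs, the sketch below places catalogue sources (RA/Dec) on a viewer-centred sphere and emits hypothetical A-Frame <a-sphere> entities. The sphere radius, example coordinates, and output format are assumptions, not the actual allskyVR implementation.

```python
import numpy as np

def radec_to_sphere(ra_deg, dec_deg, radius=10.0):
    """Place catalogue sources on a viewer-centred celestial sphere.

    ra_deg, dec_deg : Right Ascension and Declination [degrees]
    radius          : sphere radius in scene units (the viewer sits at the origin)
    Returns an (N, 3) array of Cartesian positions.
    """
    ra = np.deg2rad(np.asarray(ra_deg, dtype=float))
    dec = np.deg2rad(np.asarray(dec_deg, dtype=float))
    x = radius * np.cos(dec) * np.cos(ra)
    y = radius * np.cos(dec) * np.sin(ra)
    z = radius * np.sin(dec)
    return np.column_stack([x, y, z])

# Example: three sources, written out as markup for an A-Frame scene.
positions = radec_to_sphere([10.68, 83.82, 201.37], [41.27, -5.39, -43.02])
for x, y, z in positions:
    print(f'<a-sphere position="{x:.2f} {y:.2f} {z:.2f}" radius="0.05"></a-sphere>')
```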
Scientists across all disciplines increasingly rely on machine learning algorithms to analyse and sort datasets of ever increasing volume and complexity. Although trends and outliers are easily extracted, careful and close inspection will still be necessary to explore and disentangle detailed behavior, as well as identify systematics and false positives. We must therefore incorporate new technologies to facilitate scientific analysis and exploration. Astrophysical data is inherently multi-parameter, with the spatial-kinematic dimensions at the core of observations and simulations. The arrival of mainstream virtual-reality (VR) headsets and increased GPU power, as well as the availability of versatile development tools for video games, has enabled scientists to deploy such technology to effectively interrogate and interact with complex data. In this paper we present development and results from custom-built interactive VR tools, called the iDaVIE suite, that are informed and driven by research on galaxy evolution, cosmic large-scale structure, galaxy-galaxy interactions, and gas/kinematics of nearby galaxies in survey and targeted observations. In the new era of Big Data ushered in by major facilities such as the SKA and LSST that render past analysis and refinement methods highly constrained, we believe that a paradigm shift to new software, technology and methods that exploit the power of visual perception, will play an increasingly important role in bridging the gap between statistical metrics and new discovery. We have released a beta version of the iDaVIE software system that is free and open to the community.
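As a loose illustration of the kind of data preparation an immersive spectral-cube viewer needs (not the actual iDaVIE pipeline), the sketch below reads a position-position-velocity FITS cube with astropy and rescales it to 8-bit voxel intensities suitable for a volumetric 3D texture. The file name, percentile clipping, and 8-bit quantization are illustrative choices.

```python
import numpy as np
from astropy.io import fits

def load_cube_for_vr(path):
    """Read a position-position-velocity FITS cube and rescale it to
    8-bit voxel intensities, the kind of 3D texture a volumetric VR
    renderer consumes. 'path' is a placeholder filename."""
    with fits.open(path) as hdul:
        cube = np.nan_to_num(hdul[0].data.astype(np.float32))
    lo, hi = np.percentile(cube, [1.0, 99.5])        # clip faint noise and bright outliers
    cube = np.clip((cube - lo) / (hi - lo), 0.0, 1.0)
    return (cube * 255).astype(np.uint8)             # (nchan, ny, nx) voxel grid

# Example (hypothetical file name):
# voxels = load_cube_for_vr("galaxy_hi_cube.fits")
```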
158 - W. Atwood, A. Albert, L. Baldini 2013
The event selection developed for the Fermi Large Area Telescope before launch has been periodically updated to reflect the constantly improving knowledge of the detector and the environment in which it operates. Pass 7, released to the public in August 2011, represents the most recent major iteration of this incremental process. In parallel, the LAT team has undertaken a coherent long-term effort aimed at a radical revision of the entire event-level analysis, based on the experience gained in the prime phase of the mission. This includes virtually every aspect of the data reduction process, from the simulation of the detector to the event reconstruction and the background rejection. The potential improvements include (but are not limited to) a significant reduction in background contamination coupled with an increased effective area, a better point-spread function, a better understanding of the systematic uncertainties and an extension of the energy reach for the photon analysis below 100 MeV and above a few hundred GeV. We present an overview of the work that has been done or is ongoing and the prospects for the near future.
Wireless Virtual Reality (VR) users are able to enjoy an immersive experience from anywhere at any time. However, providing full spherical VR video with high quality under limited VR interaction latency is challenging. If the viewpoint of the VR user can be predicted in advance, only the required viewpoint needs to be rendered and delivered, which can reduce the VR interaction latency. Therefore, in this paper, we use offline and online learning algorithms to predict the viewpoint of the VR user using a real VR dataset. For the offline learning algorithm, the trained model is used directly to predict the viewpoint of VR users in consecutive time slots. For the online learning algorithm, the VR user's actual viewpoint, delivered through uplink transmission, is compared with the predicted viewpoint, and the parameters of the online learning algorithm are updated to further improve the prediction accuracy. To guarantee the reliability of the uplink transmission, we integrate a proactive retransmission scheme into our proposed online learning algorithm. Simulation results show that our proposed online learning algorithm for the uplink wireless VR network with the proactive retransmission scheme exhibits only about 5% prediction error.
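The paper's own predictor is not specified here; as a minimal, hypothetical sketch of the online-learning idea (predict the next viewpoint, then correct the model once the true viewpoint arrives over the uplink), the code below uses a small linear autoregressive model updated by stochastic gradient descent. The history length, learning rate, and synthetic head motion are all assumptions.

```python
import numpy as np

class OnlineViewpointPredictor:
    """Toy online predictor for a VR user's viewing angles (yaw, pitch).

    Predicts the next viewpoint from the last `history` observed viewpoints
    and refines its weights with a gradient step whenever the true viewpoint
    is reported back over the uplink. Illustrative stand-in only, not the
    algorithm from the cited paper."""

    def __init__(self, history=4, lr=0.05):
        self.history = history
        self.lr = lr
        self.w = np.zeros((history, 2))   # one weight per lag, per angle
        self.buffer = []                  # recent (yaw, pitch) samples

    def predict(self):
        if len(self.buffer) < self.history:
            # Not enough history yet: repeat the latest sample (or the origin).
            return self.buffer[-1] if self.buffer else np.zeros(2)
        X = np.array(self.buffer[-self.history:])   # (history, 2) lagged samples
        return np.sum(self.w * X, axis=0)           # weighted sum of lags per angle

    def update(self, actual):
        """Gradient step on the squared error once the uplink delivers `actual`."""
        actual = np.asarray(actual, dtype=float)
        if len(self.buffer) >= self.history:
            X = np.array(self.buffer[-self.history:])
            err = self.predict() - actual            # prediction error per angle
            self.w -= self.lr * X * err              # d(0.5 * err^2) / dw
        self.buffer.append(actual)

# Example: feed a synthetic, slowly panning head motion, then forecast the next slot.
pred = OnlineViewpointPredictor()
for t in range(50):
    pred.update([np.sin(0.05 * t), 0.3 * np.cos(0.05 * t)])  # ground truth from the headset
print(pred.predict())  # predicted (yaw, pitch) for the next time slot
```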
With the popularity of online access in virtual reality (VR) devices, it will become important to investigate exclusive and interactive CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) designs for VR devices. In this paper, we first present four traditional two-dimensional (2D) CAPTCHAs (i.e., text-based, image-rotated, image-puzzled, and image-selected CAPTCHAs) in VR. Then, based on the three-dimensional (3D) interaction characteristics of VR devices, we propose two vrCAPTCHA design prototypes (i.e., task-driven and bodily motion-based CAPTCHAs). We conducted a user study with six participants for exploring the feasibility of our two vrCAPTCHAs and traditional CAPTCHAs in VR. We believe that our two vrCAPTCHAs can be an inspiration for the further design of CAPTCHAs in VR.