
An XR rapid prototyping framework for interoperability across the reality spectrum

Publication date: 2021
Language: English





Applications of the Extended Reality (XR) spectrum, a superset of Mixed, Augmented and Virtual Reality, are gaining prominence and can be employed in a variety of areas, such as virtual museums. Examples can be found in education, cultural heritage, health and treatment, entertainment, marketing, and more. Most computer graphics applications today operate in only one of these realities, and the lack of applications spanning the XR spectrum is a real shortcoming. Solving this problem brings several advantages. Firstly, releasing an application across the XR spectrum can help discover the reality best suited to it. Moreover, an application may be more immersive within a particular reality, depending on its context. Furthermore, its availability extends to a broader range of users: for instance, if an application is released in both Virtual and Augmented Reality, it becomes accessible to users who do not own a VR headset but do have a mobile AR device. The question that arises at this point is: is it possible for a full software application stack to be converted across the XR spectrum in a semi-automatic way, without sacrificing UI/UX? This can be quite difficult, depending on the architecture and implementation of the application. Most companies today support only one reality, because they lack the UI/UX software architecture or the resources to cover the complete XR spectrum. In this work, we present an automatic reality transition in the context of virtual museum applications. We propose a development framework that performs this XR transition automatically, transforming any XR project into a different reality, such as Augmented or Virtual. It also reduces development time while increasing the XR availability of 3D applications, encouraging developers to release applications across the XR spectrum.
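
The abstract gives no implementation details, but the core idea behind any cross-reality framework is an abstraction layer that isolates application logic from the target reality. The sketch below is a minimal, hypothetical illustration of that pattern; all names (RealityBackend, VRBackend, ARBackend, build_scene) are invented for demonstration and are not taken from the paper.

    # Hypothetical sketch of a reality-abstraction layer; names and structure
    # are illustrative assumptions, not the framework described in the paper.
    from abc import ABC, abstractmethod

    class RealityBackend(ABC):
        """Common interface that each target reality must implement."""

        @abstractmethod
        def setup_camera(self) -> None: ...

        @abstractmethod
        def place_object(self, name: str, position: tuple) -> None: ...

    class VRBackend(RealityBackend):
        def setup_camera(self) -> None:
            print("VR: stereo camera rig attached to the headset pose")

        def place_object(self, name, position):
            print(f"VR: {name} placed at world coordinates {position}")

    class ARBackend(RealityBackend):
        def setup_camera(self) -> None:
            print("AR: video see-through camera with device tracking")

        def place_object(self, name, position):
            print(f"AR: {name} anchored to a detected surface near {position}")

    def build_scene(backend: RealityBackend) -> None:
        """Application logic written once, against the shared interface."""
        backend.setup_camera()
        backend.place_object("museum_exhibit", (0.0, 1.5, -2.0))

    # The same scene description targets either reality.
    build_scene(VRBackend())
    build_scene(ARBackend())

Written this way, supporting a new reality means implementing one more backend class while the scene description stays untouched, which is the kind of reuse a semi-automatic XR transition framework would aim to provide.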




Related research

Computational medical XR (extended reality) unifies the computer science applications of intelligent reality, medical virtual reality, medical augmented reality and spatial computing for the creation of medical training, planning and navigation content. It builds upon clinical XR by bringing in novel low-code/no-code XR authoring platforms suitable for medical professionals as well as XR content creators.
Thanks to recent advancements in technology, eXtended Reality (XR) applications are gaining a lot of momentum, and they will surely become increasingly popular in the next decade. These new applications, however, also require a step forward in terms of models to simulate and analyze this type of traffic source in modern communication networks, in order to guarantee users state-of-the-art performance and Quality of Experience (QoE). Recognizing this need, in this work we present a novel open-source traffic model, which researchers can use as a starting point both for improving the model itself and for designing optimized algorithms for the transmission of these peculiar data flows. Along with the mathematical model and the code, we also share with the community the traces we gathered for our study, collected from freely available applications such as Minecraft VR, Google Earth VR, and Virus Popper. Finally, we propose a roadmap for the construction of an end-to-end framework that fills this gap in the current state of the art.
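
The abstract does not reproduce the model itself. As a rough illustration of what a frame-based XR traffic source looks like, the following sketch emits one burst per rendered frame with normally distributed sizes; the frame rate and size parameters are invented for demonstration and are not values from the paper's model or traces.

    import random

    # Minimal illustrative sketch of a frame-based XR traffic source.
    # Frame rate and size distribution are assumptions for demonstration,
    # not parameters taken from the paper's model or traces.
    FRAME_RATE_HZ = 60                 # one burst of packets per rendered frame
    MEAN_FRAME_BYTES = 100_000         # mean encoded frame size
    FRAME_SIZE_STDDEV = 20_000

    def generate_frames(duration_s: float):
        """Yield (timestamp_s, frame_bytes) pairs for a synthetic XR stream."""
        period = 1.0 / FRAME_RATE_HZ
        t = 0.0
        while t < duration_s:
            size = max(1, int(random.gauss(MEAN_FRAME_BYTES, FRAME_SIZE_STDDEV)))
            yield t, size
            t += period

    for ts, size in generate_frames(0.05):
        print(f"t={ts:.4f}s  frame={size} bytes")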
Rahul Arora, Karan Singh (2020)
Complex 3D curves can be created by drawing directly in mid-air in immersive environments (Augmented and Virtual Reality). Drawing mid-air strokes precisely on the surface of a 3D virtual object, however, is difficult, necessitating a projection of the mid-air stroke onto the user-intended surface curve. We present the first detailed investigation of the fundamental problem of 3D stroke projection in VR. An assessment of the design requirements for real-time drawing of curves on 3D objects in VR is followed by the definition and classification of multiple techniques for 3D stroke projection. We analyze the advantages and shortcomings of these approaches both theoretically and via practical pilot testing. We then formally evaluate the two most promising techniques, spraycan and mimicry, with 20 users in VR. The study shows a strong qualitative and quantitative user preference for our novel stroke mimicry projection algorithm. We further illustrate the effectiveness and utility of stroke mimicry for drawing complex 3D curves on surfaces in various artistic and functional design applications.
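
As background for the projection problem, the sketch below maps each mid-air sample to its closest point on a stand-in surface (a unit sphere). This shows only the shared core idea of stroke projection, not the paper's spraycan or mimicry algorithms, and all names and values are illustrative.

    import numpy as np

    # Illustrative sketch of the general stroke-projection problem: each
    # mid-air sample is pushed onto a surface. A sphere stands in for the
    # 3D object, and closest-point projection stands in for the paper's
    # techniques; this is NOT the spraycan or mimicry algorithm itself.
    def project_stroke_to_sphere(stroke: np.ndarray,
                                 center: np.ndarray,
                                 radius: float) -> np.ndarray:
        """Map each mid-air stroke point to its closest point on a sphere."""
        offsets = stroke - center                    # vectors from sphere center
        norms = np.linalg.norm(offsets, axis=1, keepdims=True)
        norms = np.where(norms == 0, 1.0, norms)     # avoid division by zero
        return center + radius * offsets / norms     # radially projected points

    midair_stroke = np.array([[0.2, 1.3, -0.4],
                              [0.25, 1.35, -0.45],
                              [0.3, 1.4, -0.5]])
    on_surface = project_stroke_to_sphere(midair_stroke, np.zeros(3), 1.0)
    print(on_surface)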
The GAPS (Global Architecture of Planetary Systems) project is a mainly Italian effort aimed at the comprehensive characterization of the architectural properties of planetary systems as a function of their host stars' characteristics, using the radial velocity technique. Since its beginning in 2012, the project has exploited the HARPS-N high-resolution optical spectrograph mounted on the 4-m-class TNG telescope in La Palma (Canary Islands). More recently, with the upgrade of the TNG near-infrared spectrograph GIANO-B, obtained in the framework of the GIARPS project, it has become possible to perform simultaneous observations with these two instruments, thus providing data in both the optical and the near-infrared range at the same time. The large amount of data obtained in about 5 years of observations has provided various scientific outputs, among them time series of radial velocity (RV) profiles of the investigated stellar systems. This contribution shows the first steps undertaken to deploy the GAPS time series as an interoperable resource within the VO framework designed by the IVOA. This effort has a double goal: on one side, to make the time series data (from RVs up to their originating spectra) available to the general astrophysical community in an interoperable way; on the other, to provide use cases and a prototyping base for the ongoing time-domain priority effort at the IVOA level. Time series dataset discovery, depicted through use cases and mapped against the ObsCore model, will be shown, highlighting commonalities as well as missing metadata requirements. Future development steps and criticalities, related also to the joint discovery and access of datasets provided by the two spectrographs operated side by side, will be summarized.
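
For readers unfamiliar with VO-style discovery, a time-series dataset published through the IVOA framework can typically be found with an ADQL query against an ObsCore table exposed by a TAP service. The sketch below uses the pyvo library; the service URL is a placeholder, not the actual GAPS endpoint, and the columns available depend on the service.

    import pyvo

    # Sketch of an ObsCore discovery query via a TAP service, of the kind
    # the contribution describes. The endpoint URL is a placeholder, and
    # dataproduct_type = 'timeseries' follows the ObsCore 1.1 vocabulary.
    service = pyvo.dal.TAPService("https://example.org/tap")  # hypothetical endpoint

    query = """
    SELECT obs_id, target_name, t_min, t_max, access_url
    FROM ivoa.obscore
    WHERE dataproduct_type = 'timeseries'
    """
    results = service.search(query)
    for row in results:
        print(row["target_name"], row["access_url"])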
Providing a depth-rich Virtual Reality (VR) experience to users without causing discomfort remains a challenge with today's commercially available head-mounted displays (HMDs), which enforce strict measures on stereoscopic camera parameters for the sake of keeping visual discomfort to a minimum. However, these measures often lead to an unimpressive VR experience with a shallow feeling of depth. We propose the first method ready to be used with existing consumer HMDs for automated stereoscopic camera control in virtual environments (VEs). Using radial basis function interpolation and projection matrix manipulations, our method makes it possible to significantly enhance user experience in terms of overall perceived depth while keeping visual discomfort on a par with the default arrangement. In our implementation, we also introduce the first immersive interface for authoring a unique 3D stereoscopic cinematography for any VE to be experienced with consumer HMDs. We conducted a user study that demonstrates the benefits of our approach in terms of superior picture quality and perceived depth. We also investigated the effects of using depth of field (DoF) in combination with our approach and observed that the addition of our DoF implementation was perceived as a degraded, or at best similar, experience.
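
To make the interpolation idea concrete, the sketch below uses SciPy's RBFInterpolator to blend stereo camera separations authored at a few key locations in a VE; the positions and separation values are invented for demonstration and do not come from the paper.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Illustrative sketch of the RBF-interpolation idea behind the method:
    # stereo parameters authored at a few key positions in the virtual
    # environment are smoothly interpolated everywhere else. The sample
    # positions and separations below are invented for demonstration.
    key_positions = np.array([[0.0, 0.0],
                              [5.0, 0.0],
                              [0.0, 5.0],
                              [5.0, 5.0]])      # (x, z) ground-plane coordinates
    interaxial_mm = np.array([60.0, 45.0, 50.0, 40.0])  # authored per position

    separation = RBFInterpolator(key_positions, interaxial_mm)

    # Query the camera separation for the viewer's current location.
    viewer = np.array([[2.5, 2.5]])
    print(f"interpolated separation: {separation(viewer)[0]:.1f} mm")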
