
Mid-Air Drawing of Curves on 3D Surfaces in Virtual Reality

Added by Rahul Arora
Publication date: 2020
Language: English





Complex 3D curves can be created by directly drawing mid-air in immersive environments (Augmented and Virtual Realities). Drawing mid-air strokes precisely on the surface of a 3D virtual object, however, is difficult, necessitating a projection of the mid-air stroke onto the user-intended surface curve. We present the first detailed investigation of the fundamental problem of 3D stroke projection in VR. An assessment of the design requirements of real-time drawing of curves on 3D objects in VR is followed by the definition and classification of multiple techniques for 3D stroke projection. We analyze the advantages and shortcomings of these approaches both theoretically and via practical pilot testing. We then formally evaluate the two most promising techniques, spraycan and mimicry, with 20 users in VR. The study shows a strong qualitative and quantitative user preference for our novel stroke mimicry projection algorithm. We further illustrate the effectiveness and utility of stroke mimicry for drawing complex 3D curves on surfaces in various artistic and functional design applications.
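The two techniques compared in the study belong to different projection families: spraycan casts a ray along the controller's pointing direction for each mid-air sample, while mimicry projects an offset copy of the stroke onto the surface via closest-point queries so that the surface curve reproduces the shape of the mid-air stroke. The sketch below illustrates these two families in simplified form; the mesh helpers (`raycast`, `closest_point`) and the constant-offset variant of mimicry are assumptions made for illustration, not the paper's exact algorithm.

```python
import numpy as np

# Illustrative sketch only. It assumes a hypothetical mesh API with two helpers:
#   mesh.raycast(origin, direction) -> surface point hit by the ray, or None
#   mesh.closest_point(query)       -> closest surface point to a 3D query point

def project_spraycan(mesh, controller_positions, controller_directions):
    """Occlusion-style projection: cast a ray along the controller's pointing
    direction for every mid-air sample and keep the surface hit points."""
    projected = []
    for origin, direction in zip(controller_positions, controller_directions):
        hit = mesh.raycast(np.asarray(origin, float), np.asarray(direction, float))
        if hit is not None:
            projected.append(hit)
    return projected

def project_mimicry(mesh, stroke_points):
    """Anchored proximity-style projection (simplified, constant-offset variant):
    anchor the stroke at its first sample, translate every sample by the anchor
    offset, and project via closest-point queries so the surface curve mimics
    the shape of the mid-air stroke."""
    stroke_points = np.asarray(stroke_points, dtype=float)
    anchor = np.asarray(mesh.closest_point(stroke_points[0]), dtype=float)
    offset = anchor - stroke_points[0]  # the paper's algorithm may update this incrementally
    return [mesh.closest_point(p + offset) for p in stroke_points]
```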




Providing a depth-rich Virtual Reality (VR) experience to users without causing discomfort remains a challenge with today's commercially available head-mounted displays (HMDs), which enforce strict measures on stereoscopic camera parameters for the sake of keeping visual discomfort to a minimum. However, these measures often lead to an unimpressive VR experience with a shallow feeling of depth. We propose the first method ready to be used with existing consumer HMDs for automated stereoscopic camera control in virtual environments (VEs). Using radial basis function interpolation and projection matrix manipulations, our method makes it possible to significantly enhance user experience in terms of overall perceived depth while keeping visual discomfort on a par with the default arrangement. In our implementation, we also introduce the first immersive interface for authoring a unique 3D stereoscopic cinematography for any VE to be experienced with consumer HMDs. We conducted a user study that demonstrates the benefits of our approach in terms of superior picture quality and perceived depth. We also investigated the effects of using depth of field (DoF) in combination with our approach and observed that adding our DoF implementation was perceived as a degraded experience, or at best a comparable one.
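The projection-matrix manipulation underlying this kind of stereoscopic camera control is typically the standard off-axis (asymmetric-frustum) construction, parameterised by the interaxial eye separation and the zero-parallax (convergence) distance. The sketch below shows that construction; it does not reproduce the paper's RBF-based automated control of these parameters, and the function names are illustrative.

```python
import numpy as np

def frustum(left, right, bottom, top, near, far):
    """OpenGL-style perspective frustum matrix."""
    m = np.zeros((4, 4))
    m[0, 0] = 2.0 * near / (right - left)
    m[1, 1] = 2.0 * near / (top - bottom)
    m[0, 2] = (right + left) / (right - left)
    m[1, 2] = (top + bottom) / (top - bottom)
    m[2, 2] = -(far + near) / (far - near)
    m[2, 3] = -2.0 * far * near / (far - near)
    m[3, 2] = -1.0
    return m

def stereo_projections(fov_y, aspect, near, far, eye_sep, convergence):
    """Asymmetric-frustum projection matrices for the left and right eyes,
    parameterised by interaxial separation and zero-parallax (convergence)
    distance. The eye cameras themselves are additionally offset by
    +/- eye_sep/2 along the camera's right axis in the view transform."""
    top = near * np.tan(fov_y / 2.0)
    bottom = -top
    half_w = aspect * np.tan(fov_y / 2.0) * convergence  # half-width at the convergence plane
    inner = (half_w - eye_sep / 2.0) * near / convergence
    outer = (half_w + eye_sep / 2.0) * near / convergence
    left_eye = frustum(-inner, outer, bottom, top, near, far)
    right_eye = frustum(-outer, inner, bottom, top, near, far)
    return left_eye, right_eye

# Example: widening the eye separation or pulling the convergence plane closer
# deepens the stereo effect, which is what an automated controller adjusts.
P_left, P_right = stereo_projections(np.radians(90), 1.0, 0.1, 100.0,
                                     eye_sep=0.064, convergence=2.0)
```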
Traditional cinematography has relied for over a century on a well-established set of editing rules, called continuity editing, to create a sense of situational continuity. Despite massive changes in visual content across cuts, viewers in general experience no trouble perceiving the discontinuous flow of information as a coherent set of events. However, Virtual Reality (VR) movies are intrinsically different from traditional movies in that the viewer controls the camera orientation at all times. As a consequence, common editing techniques that rely on camera orientations, zooms, etc., cannot be used. In this paper we investigate key relevant questions to understand how well traditional movie editing carries over to VR. To do so, we rely on recent cognition studies and the event segmentation theory, which states that our brains segment continuous actions into a series of discrete, meaningful events. We first replicate one of these studies to assess whether the predictions of such theory can be applied to VR. We next gather gaze data from viewers watching VR videos containing different edits with varying parameters, and provide the first systematic analysis of viewers behavior and the perception of continuity in VR. From this analysis we make a series of relevant findings; for instance, our data suggests that predictions from the cognitive event segmentation theory are useful guides for VR editing; that different types of edits are equally well understood in terms of continuity; and that spatial misalignments between regions of interest at the edit boundaries favor a more exploratory behavior even after viewers have fixated on a new region of interest. In addition, we propose a number of metrics to describe viewers attentional behavior in VR. We believe the insights derived from our work can be useful as guidelines for VR content creation.
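The abstract does not list the proposed attentional metrics, so the following is only a hypothetical example of the kind of gaze-based measure one could compute around an edit: the time a viewer takes to converge on the new region of interest after a cut. The function name and the angular threshold are assumptions, not the paper's metrics.

```python
import numpy as np

# Hypothetical attentional metric around an edit; the paper's actual metrics
# are not specified in the abstract above.

def time_to_roi_convergence(timestamps, gaze_dirs, roi_dir, cut_time,
                            angle_thresh_deg=15.0):
    """Seconds from an edit (cut_time) until the viewer's gaze direction first
    falls within angle_thresh_deg of the new region of interest's direction.
    Returns None if the viewer never converges on the new ROI."""
    gaze_dirs = np.asarray(gaze_dirs, dtype=float)
    gaze_dirs = gaze_dirs / np.linalg.norm(gaze_dirs, axis=1, keepdims=True)
    roi_dir = np.asarray(roi_dir, dtype=float)
    roi_dir = roi_dir / np.linalg.norm(roi_dir)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    for t, g in zip(timestamps, gaze_dirs):
        if t >= cut_time and float(g @ roi_dir) >= cos_thresh:
            return t - cut_time
    return None
```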
Applications of the Extended Reality (XR) spectrum, a superset of Mixed, Augmented and Virtual Reality, are gaining prominence and can be employed in a variety of areas, such as virtual museums. Examples can be found in education, cultural heritage, health and treatment, entertainment, marketing, and more. The majority of computer graphics applications nowadays operate in only one of these realities. The lack of applications spanning the XR spectrum is a real shortcoming, and solving this problem brings several advantages. Firstly, releasing an application across the XR spectrum could help discover the reality it is best suited to. Moreover, an application could be more immersive within a particular reality, depending on its context. Furthermore, its availability extends to a broader range of users. For instance, if an application is released in both Virtual and Augmented Reality, it becomes accessible to users who do not own a VR headset but do have a mobile AR device. The question that arises at this point is: is it possible for a full software application stack to be converted across the XR spectrum in a semi-automatic way without sacrificing UI/UX? This may be quite difficult, depending on the architecture and implementation of the application. Most companies nowadays support only one reality, because they lack the UI/UX software architecture or the resources to support the complete XR spectrum. In this work, we present an automatic reality transition in the context of virtual museum applications. We propose a development framework that automatically enables this XR transition. The framework transforms any XR project into different realities such as Augmented or Virtual. It also reduces development time while increasing the XR availability of 3D applications, encouraging developers to release applications across the XR spectrum.
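One common way to structure an application so that a semi-automatic VR/AR transition is even possible is to isolate reality-specific behaviour behind a backend interface that is selected at build or launch time. The sketch below shows that idea in miniature; the class and method names are illustrative and are not the framework proposed in the paper.

```python
from abc import ABC, abstractmethod

# Minimal sketch of a reality-agnostic application layer: the application code
# talks to an abstract XR backend, and a factory picks a concrete VR or AR
# implementation.

class XRBackend(ABC):
    @abstractmethod
    def spawn_exhibit(self, model_path: str, position) -> None: ...

    @abstractmethod
    def handle_input(self) -> None: ...

class VRBackend(XRBackend):
    def spawn_exhibit(self, model_path, position):
        print(f"VR: placing {model_path} in the virtual museum at {position}")

    def handle_input(self):
        print("VR: reading 6-DoF controller state")

class ARBackend(XRBackend):
    def spawn_exhibit(self, model_path, position):
        print(f"AR: anchoring {model_path} to a detected surface near {position}")

    def handle_input(self):
        print("AR: reading touch and camera-pose input")

def make_backend(target: str) -> XRBackend:
    """Select the concrete reality at build or launch time."""
    return {"vr": VRBackend, "ar": ARBackend}[target.lower()]()

# The same application code then runs unchanged across the XR spectrum:
backend = make_backend("ar")
backend.spawn_exhibit("exhibit.glb", (0.0, 0.0, 2.0))
backend.handle_input()
```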
Traditional high-quality 3D graphics requires large volumes of finely detailed scene data for rendering. This demand compromises computational efficiency and local storage resources, and it becomes even more concerning for future wearable and portable virtual and augmented reality (VR/AR) displays. Recent approaches to combat this problem include remote rendering/streaming and neural representations of 3D assets. These approaches have redefined the traditional local storage-and-rendering pipeline through distributed computing or compression of large data. However, these methods typically suffer from high latency or low quality for practical visualization of large immersive virtual scenes, notably given the extra-high resolution and refresh-rate requirements of VR applications such as gaming and design. Tailored for future portable, low-storage, and energy-efficient VR platforms, we present the first gaze-contingent 3D neural representation and view synthesis method. We incorporate the human psychophysics of visual and stereo acuity into an egocentric neural representation of 3D scenery. Furthermore, we jointly optimize latency/performance and visual quality, while mutually bridging human perception and neural scene synthesis, to achieve perceptually high-quality immersive interaction. Both objective analysis and a subjective study demonstrate the effectiveness of our approach in significantly reducing local storage volume and synthesis latency (up to 99% reduction in both data size and computational time), while simultaneously presenting high-fidelity rendering with perceptual quality identical to that of fully locally stored and rendered high-quality imagery.
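Gaze-contingent methods of this kind allocate detail according to how visual acuity falls off with angular distance from the gaze point. The sketch below illustrates that allocation with a simple acuity-falloff model; the falloff constant and the per-tile budget scheme are generic assumptions, not the paper's neural representation or its joint optimization.

```python
import numpy as np

# Generic sketch of gaze-contingent detail allocation (not the paper's method):
# relative acuity falls off with eccentricity, and a per-tile sample budget
# follows that falloff. The constant e2 is a common approximation, not a value
# taken from the paper.

def relative_acuity(eccentricity_deg, e2=2.3):
    """Approximate visual acuity relative to the fovea at a given eccentricity."""
    return 1.0 / (1.0 + eccentricity_deg / e2)

def sample_budget(tile_dirs, gaze_dir, full_budget):
    """Scale a per-tile sample budget by acuity at each tile's angular
    distance from the current gaze direction."""
    tile_dirs = np.asarray(tile_dirs, dtype=float)
    tile_dirs = tile_dirs / np.linalg.norm(tile_dirs, axis=1, keepdims=True)
    gaze_dir = np.asarray(gaze_dir, dtype=float)
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    ecc = np.degrees(np.arccos(np.clip(tile_dirs @ gaze_dir, -1.0, 1.0)))
    return np.maximum(1, np.round(full_budget * relative_acuity(ecc))).astype(int)

# Example: tiles straight ahead keep the full budget, peripheral tiles far less.
budgets = sample_budget([[0, 0, -1], [0.5, 0, -1]], gaze_dir=[0, 0, -1], full_budget=64)
```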
In this work, a new and innovative form of spatial computing that recently appeared in the literature, called True Augmented Reality (AR), is employed in cultural heritage preservation. This innovation could be adopted by the Virtual Museums of the future to enhance the quality of experience. It emphasises the fact that a visitor will not be able to tell, at first glance, whether the artefact they are looking at is real or not, which is expected to draw the visitor's interest. True AR is not limited to artefacts but extends even to buildings or life-sized character simulations of statues. It provides the best visual quality possible, so that users will not be able to tell the real objects from the augmented ones. Such applications can be beneficial for future museums, as with True AR, 3D models of various exhibits, monuments, statues, characters, and buildings can be reconstructed and presented to visitors in a realistic and innovative way. We also propose our Virtual Reality Sample application, a True AR playground featuring basic components and tools for generating interactive Virtual Museum applications, alongside a 3D reconstructed character (the priest of the Asinou church) acting as the storyteller of the augmented experience.
