
Never Drop the Ball in the Operating Room: An efficient hand-based VR HMD controller interpolation algorithm, for collaborative, networked virtual environments

Published by Manos Kamarianakis
Publication date: 2021
Research field: Informatics Engineering
Language: English





In this work, we propose two algorithms that can be applied in the context of a networked virtual environment to efficiently handle the interpolation of displacement data for hand-based VR HMDs. Our algorithms, based on the use of dual-quaternions and multivectors respectively, impact the network consumption rate and are highly effective in scenarios involving multiple users. We illustrate convincing results in a modern game engine and a medical VR collaborative training scenario.
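The abstract does not spell out the interpolation formula the authors use; a common way to blend rigid displacements represented as dual quaternions is Dual-quaternion Linear Blending (DLB) followed by renormalization. The sketch below is a minimal, hypothetical illustration of that idea for a networked client blending its last known controller pose toward a newly received one; the function names, constants, and the choice of DLB are assumptions, not the paper's implementation.

```python
# Hypothetical sketch: interpolating a networked controller pose with
# dual-quaternion linear blending (DLB). Not the authors' implementation.
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def pose_to_dual_quat(rotation, translation):
    """Pack a rotation quaternion and a translation vector into a unit dual quaternion."""
    real = rotation / np.linalg.norm(rotation)
    t = np.array([0.0, *translation])
    dual = 0.5 * quat_mul(t, real)            # dual part encodes the translation
    return real, dual

def dlb(dq0, dq1, alpha):
    """Linearly blend two unit dual quaternions and renormalize (DLB)."""
    r0, d0 = dq0
    r1, d1 = dq1
    if np.dot(r0, r1) < 0.0:                  # take the shorter rotation arc
        r1, d1 = -r1, -d1
    real = (1.0 - alpha) * r0 + alpha * r1
    dual = (1.0 - alpha) * d0 + alpha * d1
    norm = np.linalg.norm(real)
    return real / norm, dual / norm

# Usage: blend the last acknowledged pose toward the newest network update.
prev = pose_to_dual_quat(np.array([1.0, 0.0, 0.0, 0.0]), [0.0, 1.5, 0.0])
new  = pose_to_dual_quat(np.array([0.9239, 0.0, 0.3827, 0.0]), [0.2, 1.5, -0.1])
interpolated = dlb(prev, new, alpha=0.25)     # alpha grows with each rendered frame
```

Because the client only needs the two endpoint poses to reconstruct intermediate frames locally, updates of this kind can be sent at a lower rate, which is what allows such an approach to reduce network consumption in multi-user sessions.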


Read also

Providing a depth-rich Virtual Reality (VR) experience to users without causing discomfort remains a challenge with today's commercially available head-mounted displays (HMDs), which enforce strict measures on stereoscopic camera parameters for the sake of keeping visual discomfort to a minimum. However, these measures often lead to an unimpressive VR experience with a shallow feeling of depth. We propose the first method ready to be used with existing consumer HMDs for automated stereoscopic camera control in virtual environments (VEs). Using radial basis function interpolation and projection matrix manipulations, our method makes it possible to significantly enhance user experience in terms of overall perceived depth while keeping visual discomfort on a par with the default arrangement. In our implementation, we also introduce the first immersive interface for authoring a unique 3D stereoscopic cinematography for any VE to be experienced with consumer HMDs. We conducted a user study that demonstrates the benefits of our approach in terms of superior picture quality and perceived depth. We also investigated the effects of using depth of field (DoF) in combination with our approach and observed that adding our DoF implementation was perceived as a degraded experience, or at best a similar one.
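The abstract mentions radial basis function interpolation of authored stereoscopic parameters but gives no formula; the following is a minimal, hypothetical sketch of how a camera separation value authored at a few key positions could be interpolated with a Gaussian RBF. The kernel, parameter names, and sample values are assumptions for illustration only.

```python
# Hypothetical sketch: interpolating an authored stereoscopic camera separation
# over viewer positions with Gaussian radial basis functions.
import numpy as np

def rbf_fit(centers, values, epsilon=1.0):
    """Solve for RBF weights so the interpolant passes through the authored samples."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    phi = np.exp(-(epsilon * d) ** 2)         # Gaussian kernel matrix
    return np.linalg.solve(phi, values)

def rbf_eval(centers, weights, query, epsilon=1.0):
    """Evaluate the interpolated camera separation at a viewer position."""
    d = np.linalg.norm(centers - query, axis=-1)
    return float(np.exp(-(epsilon * d) ** 2) @ weights)

# Authored key positions in the VE and the eye separation chosen for each (metres).
key_positions  = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 2.0], [10.0, 0.0, -1.0]])
key_separation = np.array([0.064, 0.030, 0.075])

weights = rbf_fit(key_positions, key_separation)
separation_here = rbf_eval(key_positions, weights, np.array([3.0, 0.0, 1.0]))
```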
In this work, we present an integrated geometric framework, deep-cut, that enables for the first time a user to geometrically and algorithmically cut, tear and drill the surface of a skinned model without prior constraints, layered on top of a custom soft-body mesh deformation algorithm. Both layered algorithms in this framework yield real-time results and are amenable to mobile Virtual Reality, so that they can be utilized in a variety of interactive application scenarios. Our framework dramatically improves real-time user experience and task performance in VR, without pre-calculated or artificially designed cuts, tears, drills or surface deformations via predefined rigged animations, which is the current state of the art in mobile VR. Thus our framework improves user experience on the one hand and, on the other, saves both the time and cost of expensive, manual, labour-intensive design pre-calculation stages.
In this work, a new and innovative form of spatial computing that appeared recently in the literature, called True Augmented Reality (AR), is employed in cultural heritage preservation. This innovation could be adopted by the Virtual Museums of the future to enhance the quality of experience. It emphasises the fact that a visitor will not be able to tell, at first glance, whether the artefact he/she is looking at is real or not, which is expected to draw the visitor's interest. True AR is not limited to artefacts but extends even to buildings or life-sized character simulations of statues. It provides the best visual quality possible so that users will not be able to tell the real objects from the augmented ones. Such applications can be beneficial for future museums, as with True AR, 3D models of various exhibits, monuments, statues, characters and buildings can be reconstructed and presented to the visitors in a realistic and innovative way. We also propose our Virtual Reality Sample application, a True AR playground featuring basic components and tools for generating interactive Virtual Museum applications, alongside a 3D reconstructed character (the priest of Asinou church) who acts as the storyteller of the augmented experience.
Jingbo Zhao, Ruize An, Ruolin Xu (2021)
Hand gesture is a new and promising interface for locomotion in virtual environments. While several previous studies have proposed different hand gestures for virtual locomotion, little is known about their differences in terms of performance and user preference in virtual locomotion tasks. In the present paper, we presented three different hand gesture interfaces and their algorithms for locomotion, which are called the Finger Distance gesture, the Finger Number gesture and the Finger Tapping gesture. These gestures were inspired by previous studies of gesture-based locomotion interfaces and are typical gestures that people are familiar with in their daily lives. Implementing these hand gesture interfaces in the present study enabled us to systematically compare the differences between them. In addition, to compare the usability of these gestures to locomotion interfaces using gamepads, we also designed and implemented a gamepad interface based on the Xbox One controller. We compared these four interfaces through two virtual locomotion tasks, which assessed their performance and user preference for speed control and waypoint navigation. Results showed that user preference and performance of the Finger Distance gesture were comparable to those of the gamepad interface. The Finger Number gesture also had performance and user preference close to those of the Finger Distance gesture. Our study demonstrates that the Finger Distance gesture and the Finger Number gesture are very promising interfaces for virtual locomotion. We also discuss why the Finger Tapping gesture needs further improvement before it can be used for virtual walking.
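The abstract names a Finger Distance gesture used for speed control but does not give its exact mapping; the sketch below is a hypothetical illustration of the general idea, mapping the thumb-to-index fingertip distance to a locomotion speed. The thresholds, the linear ramp, and the function names are assumptions rather than the paper's calibration.

```python
# Hypothetical sketch of a Finger Distance style speed mapping.
import numpy as np

def finger_distance_speed(thumb_tip, index_tip,
                          min_dist=0.02, max_dist=0.12, max_speed=3.0):
    """Map the thumb-to-index fingertip distance (metres) to a speed in m/s."""
    dist = float(np.linalg.norm(np.asarray(thumb_tip) - np.asarray(index_tip)))
    if dist <= min_dist:
        return 0.0                             # pinched fingers: stand still
    t = min((dist - min_dist) / (max_dist - min_dist), 1.0)
    return t * max_speed                       # wider spread -> faster locomotion

# Example: fingertip positions from a hand-tracking SDK, in metres.
speed = finger_distance_speed([0.00, 1.40, 0.30], [0.05, 1.42, 0.31])
```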
RoomShift is a room-scale dynamic haptic environment for virtual reality, using a small swarm of robots that can move furniture. RoomShift consists of nine shape-changing robots: Roombas with mechanical scissor lifts. These robots drive beneath a piece of furniture to lift, move and place it. By augmenting virtual scenes with physical objects, users can sit on, lean against, place and otherwise interact with furniture with their whole body, just as in the real world. When the virtual scene changes or users navigate within it, the swarm of robots dynamically reconfigures the physical environment to match the virtual content. We describe the hardware and software implementation, applications in virtual tours and architectural design, and interaction techniques.