
Dynamic Difficulty Adjustment in Virtual Reality Exergames through Experience-driven Procedural Content Generation

Published by: Tobias Huber
Publication date: 2021
Research field: Informatics Engineering
Language: English





Virtual Reality (VR) games that feature physical activities have been shown to increase players' motivation to do physical exercise. However, for such exercises to have a positive healthcare effect, they have to be repeated several times a week. To maintain player motivation over longer periods of time, games often employ Dynamic Difficulty Adjustment (DDA) to adapt the game's challenge to the player's capabilities. For exercise games, this is mostly done by tuning specific in-game parameters such as the speed of objects. In this work, we propose to use experience-driven Procedural Content Generation for DDA in VR exercise games by procedurally generating levels that match the player's current capabilities. Creating completely new levels, rather than only fine-tuning specific parameters, has the potential to decrease repetition over longer time periods and allows the cognitive and physical challenge of the exergame to be adapted simultaneously. As a proof of concept, we implement an initial prototype in which the player must traverse a maze that includes several exercise rooms, whereby the generation of the maze is realized by a neural network. Passing those exercise rooms requires the player to perform physical activities. To match the player's capabilities, we use Deep Reinforcement Learning to adjust the structure of the maze and to decide which exercise rooms to include in it. We evaluate our prototype in an exploratory user study using both biodata and subjective questionnaires.
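The abstract gives no implementation details, but the core idea (a reinforcement learning agent that picks generation parameters so that the player's measured exertion stays near a target) can be sketched roughly as follows. This is a minimal, assumed illustration: the state features, action set, reward shape, and the toy simulator are placeholders, not the authors' actual prototype.

# A minimal, assumed sketch (not the authors' prototype): a small Q-network
# picks whether the next generated maze should be easier, similar, or harder,
# rewarded when the (simulated) player's exertion stays near a target zone.
import random
import torch
import torch.nn as nn

N_ACTIONS = 3   # assumed action set: 0 = easier maze, 1 = keep difficulty, 2 = harder maze
STATE_DIM = 2   # assumed player state: normalized heart rate, recent success rate

class QNet(nn.Module):
    """Tiny value network mapping the player state to action values."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))

    def forward(self, x):
        return self.net(x)

def simulate_level(state, action):
    """Toy stand-in for playing one generated maze: harder mazes raise heart
    rate and lower the success rate; reward peaks near the target exertion."""
    hr, success = state
    hr = min(1.0, max(0.0, hr + 0.1 * (action - 1) + random.uniform(-0.05, 0.05)))
    success = min(1.0, max(0.0, success - 0.1 * (action - 1) + random.uniform(-0.05, 0.05)))
    reward = 1.0 - abs(hr - 0.7)   # assumed target: roughly 70 % of maximum heart rate
    return (hr, success), reward

qnet = QNet()
optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-3)
gamma, eps = 0.9, 0.2              # discount factor and exploration rate
state = (0.5, 0.5)

for level in range(500):
    s = torch.tensor(state, dtype=torch.float32)
    # epsilon-greedy choice of how to generate the next maze
    action = random.randrange(N_ACTIONS) if random.random() < eps else int(qnet(s).argmax())
    next_state, reward = simulate_level(state, action)
    s_next = torch.tensor(next_state, dtype=torch.float32)
    with torch.no_grad():          # one-step Q-learning target
        target = reward + gamma * qnet(s_next).max()
    loss = (qnet(s)[action] - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    state = next_state

In the actual prototype the action space would instead describe the maze layout and which exercise rooms to include, and the reward would be derived from the recorded biodata; the sketch only shows the general shape of such an adaptation loop.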


Read also

Purpose: To preliminarily evaluate the feasibility and efficacy of using meditative virtual reality (VR) to improve the hospital experience of intensive care unit (ICU) patients. Methods: The effects of VR were examined in a non-randomized, single-center cohort. Fifty-nine patients admitted to the surgical or trauma ICU of the University of Florida Health Shands Hospital participated. A Google Daydream headset was used to expose ICU patients to commercially available VR applications focused on calmness and relaxation (Google Spotlight Stories and RelaxVR). Sessions were conducted once daily for up to seven days. Outcome measures included pain level, anxiety, depression, medication administration, sleep quality, heart rate, respiratory rate, blood pressure, delirium status, and patient ratings of the VR system. Comparisons were made using paired t-tests and mixed models where appropriate. Results: The VR meditative intervention was found to improve patients' ICU experience, with reduced levels of anxiety and depression; however, there was no evidence suggesting that VR had any significant effect on physiological measures, pain, or sleep. Conclusion: The use of VR technology in the ICU was shown to be easily implemented and well received by patients.
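As an aside on the analysis mentioned above, the paired comparison of pre- versus post-session scores can be sketched as follows; the variable names and numbers are synthetic placeholders, not data from the study.

# Illustrative only: paired t-test on made-up pre/post anxiety scores,
# mirroring the within-subject comparison described in the abstract.
from scipy import stats

pre_anxiety = [6, 5, 7, 4, 6, 5, 7, 6]    # hypothetical scores before a VR session
post_anxiety = [4, 4, 5, 3, 5, 4, 6, 5]   # hypothetical scores after the session

t_stat, p_value = stats.ttest_rel(pre_anxiety, post_anxiety)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")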
The recent rise of interest in Virtual Reality (VR) came with the availability of commodity commercial VR products, such as the Head Mounted Displays (HMDs) created by Oculus and other vendors. To accelerate user adoption of VR headsets, content providers should focus on producing high-quality immersive content for these devices. Similarly, multimedia streaming service providers should enable the means to stream 360° VR content on their platforms. In this study, we cover different aspects related to VR content representation, streaming, and quality assessment that will help establish the basic knowledge of how to build a VR streaming system.
Shannon's Index of Difficulty ($SID$), a logarithmic relation between movement amplitude and target width, is well established for modelling movement time in pointing tasks. However, it cannot resolve the inherent speed-accuracy trade-off, where emphasizing accuracy compromises speed and vice versa. The effective target width is considered a spatial adjustment that compensates for accuracy; however, no significant adjustment exists in the literature to compensate for speed. Real-life pointing tasks are both spatially and temporally unconstrained, and spatial adjustment alone is insufficient for modelling them due to several human factors. To resolve this, we propose the $ANTASID$ (A Novel Temporal Adjustment to $SID$) formulation with a detailed performance analysis. We hypothesized the temporal efficiency of interaction as a potential temporal adjustment factor ($t$), compensating for speed. Considering spatial and/or temporal adjustments to $SID$, we conducted regression analyses using our own and benchmark datasets in both controlled and uncontrolled scenarios. The $ANTASID$ formulation showed significantly superior fitness values and throughput in all scenarios.
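For reference, the standard Shannon formulation of the index of difficulty, the movement-time model built on it, and the usual effective-width adjustment are

$$SID = \log_2\!\left(\frac{A}{W} + 1\right), \qquad MT = a + b \cdot SID, \qquad W_e = 4.133\,\sigma,$$

where $A$ is the movement amplitude, $W$ the target width, $a$ and $b$ empirically fitted constants, and $W_e$ the effective target width derived from the standard deviation $\sigma$ of the observed selection endpoints. The closed form of the proposed temporal adjustment factor $t$ is not given in the abstract and is therefore not reproduced here.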
Luis Valente, 2016
This paper proposes the concept of live-action virtual reality games as a new genre of digital games based on an innovative combination of live action, mixed reality, context awareness, and interaction paradigms that comprise tangible objects, context-aware input devices, and embedded/embodied interactions. Live-action virtual reality games are live-action games because a player physically acts out (using his/her real body and senses) his/her avatar (his/her virtual representation) in the game stage, which is the mixed-reality environment where the game happens. The game stage is a kind of augmented virtuality: a mixed reality where the virtual world is augmented with real-world information. In live-action virtual reality games, players wear HMD devices and see a virtual world that is constructed using the physical-world architecture as the basic geometry and context information. Physical objects that reside in the physical world are also mapped to virtual elements. Live-action virtual reality games keep the virtual and real worlds superimposed, requiring players to physically move in the environment and to use different interaction paradigms (such as tangible and embodied interaction) to complete game activities. This setup enables players to touch physical architectural elements (such as walls) and other objects, feeling the game stage. Players have free movement and may interact with physical objects placed in the game stage, implicitly and explicitly. Live-action virtual reality games differ from similar game concepts because they sense and use contextual information to create unpredictable game experiences, giving rise to emergent gameplay.
We present PhyShare, a new haptic user interface based on actuated robots. Virtual reality has recently been gaining wide adoption, and effective haptic feedback in these scenarios can strongly support users' senses in bridging the virtual and physical worlds. Since participants do not directly observe these robotic proxies, we investigate the multiple mappings between physical robots and virtual proxies that can utilize the resources needed to provide a well-rounded VR experience. PhyShare bots can act either as directly touchable objects or as invisible carriers of physical objects, depending on the scenario. They also support distributed collaboration, allowing remotely located VR collaborators to share the same physical feedback.

