
Level of Presence in Team-Building Activities: Gaming Component in Virtual Environments

Published by Gianluca De Leo
Publication date: 2011
Research field: Informatics Engineering
Paper language: English





Historically, the training of teams has been implemented using a face-to-face approach. In the past decade, online multi-user virtual environments have offered a solution for training teams whose members are geographically dispersed. In order to develop an effective team training activity, a high sense of presence among the participants needs to be reached. Previous research studies reported reaching a high level of presence even when using inexpensive technology such as a laptop and a headset. This study evaluates the level of presence of ten subjects who had to perform a team-building activity in a multi-user virtual environment using a laptop computer and a headset. The authors are interested in determining which user characteristics, such as gender, age, and knowledge of computers, have a strong correlation with the level of sense of presence. The results of this study showed that female participants were more likely to engage in the activity and perceived fewer negative effects. Participants who reported fewer negative effects, such as feeling tired, dizzy, or experiencing eye strain during the team-building activity, reached a higher level of sense of presence.
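The correlation analysis described above can be illustrated with a minimal sketch. The column names (gender, age, computer_knowledge, negative_effects, presence_score) and the sample values below are hypothetical stand-ins for the study's questionnaire items, not the authors' actual data.

```python
# Illustrative sketch: correlating user characteristics with presence scores.
# The data frame and column names are hypothetical; they do not reproduce the
# study's questionnaire or its results.
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical responses from ten participants (gender: 1 = male, 2 = female).
df = pd.DataFrame({
    "gender":             [1, 2, 2, 1, 2, 1, 2, 1, 2, 2],
    "age":                [24, 31, 28, 35, 22, 41, 27, 30, 26, 33],
    "computer_knowledge": [4, 3, 5, 2, 4, 3, 5, 4, 3, 4],    # self-rated, 1-5
    "negative_effects":   [2, 1, 1, 3, 1, 4, 2, 3, 1, 2],    # tiredness/dizziness/eye strain, 1-5
    "presence_score":     [5.1, 6.3, 6.0, 4.2, 6.5, 3.8, 5.9, 4.6, 6.2, 5.4],
})

# Pearson correlation of each characteristic with the reported sense of presence.
for col in ["gender", "age", "computer_knowledge", "negative_effects"]:
    r, p = pearsonr(df[col], df["presence_score"])
    print(f"{col:>18}: r = {r:+.2f}, p = {p:.3f}")
```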


Read also

Haptic sensory feedback has been shown to complement the visual and auditory senses, improve user performance, and provide a greater sense of togetherness in collaborative and interactive virtual environments. However, we are faced with numerous challenges when deploying these systems over the present-day Internet. The most significant of these challenges are the network performance limitations of wide area networks. In this paper, we offer a structured examination of the current challenges in the deployment of haptic-based distributed systems by analyzing the recent advances in the understanding of these challenges and the progress that has been made to overcome them.
We introduce BEHAVIOR, a benchmark for embodied AI with 100 activities in simulation, spanning a range of everyday household chores such as cleaning, maintenance, and food preparation. These activities are designed to be realistic, diverse, and complex, aiming to reproduce the challenges that agents must face in the real world. Building such a benchmark poses three fundamental difficulties for each activity: definition (it can differ by time, place, or person), instantiation in a simulator, and evaluation. BEHAVIOR addresses these with three innovations. First, we propose an object-centric, predicate logic-based description language for expressing an activity's initial and goal conditions, enabling generation of diverse instances for any activity. Second, we identify the simulator-agnostic features required by an underlying environment to support BEHAVIOR, and demonstrate its realization in one such simulator. Third, we introduce a set of metrics to measure task progress and efficiency, absolute and relative to human demonstrators. We include 500 human demonstrations in virtual reality (VR) to serve as the human ground truth. Our experiments demonstrate that even state-of-the-art embodied AI solutions struggle with the level of realism, diversity, and complexity imposed by the activities in our benchmark. We make BEHAVIOR publicly available at behavior.stanford.edu to facilitate and calibrate the development of new embodied AI solutions.
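As a rough illustration of the idea of expressing initial and goal conditions as predicates over objects: the toy sketch below is not BEHAVIOR's actual description language; the predicates, object names, and state representation are invented for the example.

```python
# Illustrative only: a toy object-centric goal check in the spirit of a
# predicate-logic activity description. The predicates and objects below are
# invented; this is not BEHAVIOR's real definition language.
from typing import Dict, Tuple

# Symbolic world state: maps ground atoms (predicate, object, ...) to truth values.
State = Dict[Tuple[str, ...], bool]

def holds(state: State, *atom: str) -> bool:
    """True if the given ground predicate holds in the state."""
    return state.get(tuple(atom), False)

def goal_satisfied(state: State) -> bool:
    """Goal of a hypothetical 'clean the table' activity, as a conjunction."""
    return (holds(state, "clean", "table_1")
            and holds(state, "inside", "sponge_1", "sink_1")
            and not holds(state, "ontop", "plate_1", "table_1"))

initial_state: State = {
    ("clean", "table_1"): False,
    ("inside", "sponge_1", "sink_1"): False,
    ("ontop", "plate_1", "table_1"): True,
}

print(goal_satisfied(initial_state))  # False: the chore has not been done yet
```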
Virtual Reality (VR) provides immersive experiences in the virtual world, but it may reduce users' awareness of physical surroundings and cause safety concerns and psychological discomfort. Hence, there is a need for an ambient information design to increase users' situational awareness (SA) of physical elements when they are immersed in a VR environment. This is challenging, since there is a tradeoff between the awareness in reality and the interference with users' experience in virtuality. In this paper, we design five representations (indexical, symbolic, and iconic with three emotions) based on two dimensions (vividness and emotion) to address the problem. We conduct an empirical study to evaluate participants' SA, perceived breaks in presence (BIPs), and perceived engagement through VR tasks that require movement in space. Results show that designs with higher vividness evoke more SA, designs that are more consistent with the virtual environment can mitigate the BIP issue, and emotion-evoking designs are more engaging.
Jingbo Zhao, Ruize An, Ruolin Xu (2021)
Hand gesture is a new and promising interface for locomotion in virtual environments. While several previous studies have proposed different hand gestures for virtual locomotion, little is known about their differences in terms of performance and user preference in virtual locomotion tasks. In the present paper, we presented three different hand gesture interfaces and their algorithms for locomotion, which are called the Finger Distance gesture, the Finger Number gesture, and the Finger Tapping gesture. These gestures were inspired by previous studies of gesture-based locomotion interfaces and are typical gestures that people are familiar with in their daily lives. Implementing these hand gesture interfaces in the present study enabled us to systematically compare the differences between these gestures. In addition, to compare the usability of these gestures to locomotion interfaces using gamepads, we also designed and implemented a gamepad interface based on the Xbox One controller. We compared these four interfaces through two virtual locomotion tasks. These tasks assessed their performance and user preference on speed control and waypoint navigation. Results showed that user preference and performance of the Finger Distance gesture were comparable to those of the gamepad interface. The Finger Number gesture also showed performance and user preference close to those of the Finger Distance gesture. Our study demonstrates that the Finger Distance gesture and the Finger Number gesture are very promising interfaces for virtual locomotion. We also discuss that the Finger Tapping gesture needs further improvements before it can be used for virtual walking.
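A minimal sketch of how a Finger Distance style gesture might be mapped to locomotion speed: the thresholds, the linear mapping, and the helper name are assumptions for illustration; the paper's actual algorithm is not reproduced here.

```python
# Illustrative sketch: mapping a thumb-index pinch distance to walking speed,
# in the spirit of a "Finger Distance" gesture. Thresholds and the linear
# mapping are assumptions; the published algorithm may differ.
def finger_distance_speed(thumb_tip, index_tip,
                          min_dist=0.02, max_dist=0.10, max_speed=3.0):
    """Map the thumb-index fingertip distance (metres) to a speed (m/s)."""
    dist = sum((a - b) ** 2 for a, b in zip(thumb_tip, index_tip)) ** 0.5
    if dist <= min_dist:          # fingers effectively closed: stand still
        return 0.0
    # Clamp and scale linearly between min_dist and max_dist.
    t = min((dist - min_dist) / (max_dist - min_dist), 1.0)
    return t * max_speed

# Example with hypothetical fingertip positions from a hand tracker (metres).
print(finger_distance_speed((0.00, 0.00, 0.00), (0.06, 0.00, 0.00)))  # ~1.5 m/s
```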
We developed a novel assessment platform with untethered virtual reality, 3-dimensional sounds, and a pressure-sensing floor mat to help assess walking balance and negotiation of obstacles under diverse sensory and/or cognitive load. The platform provides an immersive 3D city-like scene with anticipated/unanticipated virtual obstacles. Participants negotiate the obstacles under perturbations of: auditory load by spatial audio, cognitive load by a memory task, and visual flow generated by avatars' movements at various amounts and speeds. A VR headset displays the scenes while providing real-time position and orientation of the participant's head. A pressure-sensing walkway senses foot pressure and visualizes it in a heatmap. The system helps to assess walking balance via pressure dynamics per foot, success rate of crossing obstacles, available response time, as well as head kinematics in response to obstacles and multitasking. Based on the assessment, a specific balance training and fall prevention program can be prescribed.
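A minimal sketch of how per-foot pressure dynamics might be summarized from a pressure-mat grid: the grid shape, the left/right split, and the sampling rate are assumptions for illustration, not the platform's actual data format.

```python
# Illustrative only: summarizing per-foot load from a pressure-mat grid.
# Grid dimensions, foot regions, and sampling rate are assumptions.
import numpy as np

def per_foot_load(frame: np.ndarray) -> tuple[float, float]:
    """Split one pressure frame (rows x cols) down the middle and sum each half."""
    mid = frame.shape[1] // 2
    left, right = float(frame[:, :mid].sum()), float(frame[:, mid:].sum())
    total = (left + right) or 1.0
    return left / total, right / total   # fraction of load under each foot

# Fake 1-second stream of 64x32 frames at 120 Hz (random values for illustration).
rng = np.random.default_rng(0)
frames = rng.random((120, 64, 32))
loads = np.array([per_foot_load(f) for f in frames])
print("mean left/right load fractions:", loads.mean(axis=0))
```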