
Comparing Hand Gestures and the Gamepad Interfaces for Locomotion in Virtual Environments

Added by Jingbo Zhao
Publication date: 2021
Language: English





Hand gestures are a new and promising interface for locomotion in virtual environments. While several previous studies have proposed different hand gestures for virtual locomotion, little is known about their differences in terms of performance and user preference in virtual locomotion tasks. In the present paper, we present three different hand gesture interfaces and their algorithms for locomotion, called the Finger Distance gesture, the Finger Number gesture and the Finger Tapping gesture. These gestures were inspired by previous studies of gesture-based locomotion interfaces and are typical gestures that people are familiar with in their daily lives. Implementing these hand gesture interfaces in the present study enabled us to systematically compare their differences. In addition, to compare the usability of these gestures to that of gamepad-based locomotion interfaces, we also designed and implemented a gamepad interface based on the Xbox One controller. We compared these four interfaces through two virtual locomotion tasks, which assessed their performance and user preference on speed control and waypoint navigation. Results showed that the user preference and performance of the Finger Distance gesture were comparable to those of the gamepad interface. The Finger Number gesture also performed comparably to the Finger Distance gesture in both performance and user preference. Our study demonstrates that the Finger Distance gesture and the Finger Number gesture are very promising interfaces for virtual locomotion. We also discuss why the Finger Tapping gesture needs further improvement before it can be used for virtual walking.
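As a rough sketch of how a Finger Distance-style speed control could be implemented (the function name, distance thresholds, and linear mapping below are illustrative assumptions, not the algorithm published in the paper):

```python
def speed_from_pinch(distance_m, d_min=0.02, d_max=0.12, v_max=3.0):
    """Map a tracked thumb-index distance (meters) to walking speed (m/s).

    Distances at or below d_min stop the avatar; distances at or above
    d_max give full speed; values in between scale linearly.
    """
    t = (distance_m - d_min) / (d_max - d_min)
    t = max(0.0, min(1.0, t))  # clamp the normalized distance to [0, 1]
    return t * v_max
```

In a real system the distance would be recomputed every frame from the hand-tracking joint positions and fed to the locomotion controller.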



Related research

The design of touchless user interfaces is gaining popularity in various contexts. Using such interfaces, users can interact with electronic devices even when their hands are dirty or non-conductive. Users with partial physical disabilities can also interact with electronic devices using such systems. Research in this direction has received a major boost from the emergence of low-cost sensors such as Leap Motion, Kinect and RealSense devices. In this paper, we propose a Leap Motion controller-based methodology to facilitate rendering of 2D and 3D shapes on display devices. The proposed method tracks finger movements while users perform natural gestures within the field of view of the sensor. In the next phase, trajectories are analyzed to extract extended Npen++ features in 3D. These features represent finger movements during the gestures, and they are fed to a unidirectional left-to-right Hidden Markov Model (HMM) for training. A one-to-one mapping between gestures and shapes is proposed. Finally, shapes corresponding to these gestures are rendered on the display using the MuPad interface. We have created a dataset of 5400 samples recorded by 10 volunteers. Our dataset contains 18 geometric and 18 non-geometric shapes such as circle, rectangle, flower, cone, sphere, etc. The proposed methodology achieves an accuracy of 92.87% when evaluated using 5-fold cross-validation. Our experiments reveal that the extended 3D features perform better than existing 3D features in the context of shape representation and classification. The method can be used for developing useful HCI applications for smart display devices.
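A common preprocessing step before extracting trajectory features like those described above is to make the fingertip path invariant to where and how large the gesture was drawn. A minimal sketch of such a step (the function and its normalization scheme are our assumptions, not the Npen++ feature definition itself):

```python
def normalize_trajectory(points):
    """Center a 3D fingertip trajectory on its centroid and scale it so
    the largest axis span becomes 1, making downstream features
    position- and size-invariant."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    centered = [(x - cx, y - cy, z - cz) for x, y, z in points]
    span = max(
        max(c[i] for c in centered) - min(c[i] for c in centered)
        for i in range(3)
    ) or 1.0  # avoid division by zero for a degenerate (single-point) path
    return [(x / span, y / span, z / span) for x, y, z in centered]
```

Features computed from the normalized path (directions, curvatures, etc.) can then be fed to an HMM or another sequence classifier.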
Virtual Reality (VR) provides immersive experiences in the virtual world, but it may reduce users' awareness of their physical surroundings and cause safety concerns and psychological discomfort. Hence, there is a need for an ambient information design to increase users' situational awareness (SA) of physical elements when they are immersed in a VR environment. This is challenging, since there is a tradeoff between awareness of reality and interference with the user's experience in virtuality. In this paper, we design five representations (indexical, symbolic, and iconic with three emotions) based on two dimensions (vividness and emotion) to address the problem. We conduct an empirical study to evaluate participants' SA, perceived breaks in presence (BIPs), and perceived engagement through VR tasks that require movement in space. Results show that designs with higher vividness evoke more SA, designs that are more consistent with the virtual environment can mitigate the BIP issue, and emotion-evoking designs are more engaging.
Providing pedestrians and other vulnerable road users with a clear indication of a fully autonomous vehicle's status and intentions is crucial to making them coexist. In the last few years, a variety of external interfaces have been proposed, leveraging different paradigms and technologies including vehicle-mounted devices (like LED panels), short-range on-road projections, and road infrastructure interfaces (e.g., special asphalts with embedded displays). These designs were evaluated in different settings, using mockups, specially prepared vehicles, or virtual environments, with heterogeneous evaluation metrics. Promising interfaces based on Augmented Reality (AR) have been proposed too, but their usability and effectiveness have not yet been tested. This paper aims to complement this body of literature by presenting a comparison of state-of-the-art interfaces and new designs under common conditions. To this aim, an immersive Virtual Reality-based simulation was developed, recreating a well-known scenario represented by pedestrians crossing in urban environments under non-regulated conditions. A user study was then performed to investigate the various dimensions of vehicle-to-pedestrian interaction leveraging objective and subjective metrics. Even though no interface clearly stood out over all the considered dimensions, one of the AR designs achieved state-of-the-art results in terms of safety and trust, at the cost of higher cognitive effort and lower intuitiveness compared to LED panels showing anthropomorphic features. Together with rankings on the various dimensions, the indications about advantages and drawbacks of the various alternatives that emerged from this study could provide important information for further developments in the field.
Haptic sensory feedback has been shown to complement the visual and auditory senses, improve user performance, and provide a greater sense of togetherness in collaborative and interactive virtual environments. However, we are faced with numerous challenges when deploying these systems over the present-day Internet. The most significant of these challenges are the network performance limitations of Wide Area Networks. In this paper, we offer a structured examination of the current challenges in the deployment of haptic-based distributed systems by analyzing recent advances in the understanding of these challenges and the progress that has been made to overcome them.
We developed a novel assessment platform with untethered virtual reality, 3-dimensional sounds, and a pressure-sensing floor mat to help assess walking balance and obstacle negotiation under diverse sensory and/or cognitive loads. The platform provides an immersive 3D city-like scene with anticipated/unanticipated virtual obstacles. Participants negotiate the obstacles under perturbations of: auditory load via spatial audio, cognitive load via a memory task, and visual flow generated by avatar movements at various densities and speeds. A VR headset displays the scenes while providing the real-time position and orientation of the participant's head. A pressure-sensing walkway senses foot pressure and visualizes it in a heatmap. The system helps to assess walking balance via pressure dynamics per foot, success rate of crossing obstacles, available response time, and head kinematics in response to obstacles and multitasking. Based on the assessment, specific balance training and fall prevention programs can be prescribed.
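A minimal sketch of how per-sample foot-pressure readings could be accumulated into a heatmap grid like the one described above (the sample format, grid resolution, and function name are our assumptions; the platform's actual walkway SDK is not described here):

```python
def pressure_grid(samples, cols, rows, cell_size):
    """Accumulate (x, y, pressure) sensor readings into a rows x cols
    grid; x and y are in the same length unit as cell_size."""
    grid = [[0.0] * cols for _ in range(rows)]
    for x, y, p in samples:
        # Clamp readings at the far edge into the last cell.
        col = min(int(x / cell_size), cols - 1)
        row = min(int(y / cell_size), rows - 1)
        grid[row][col] += p
    return grid
```

The resulting grid can be color-mapped for display, or summed per foot region to track pressure dynamics over a trial.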
