Participatory design is a popular design technique that involves end users in the early stages of the design process to obtain user-friendly gestural interfaces. Guessability studies followed by agreement analyses are often used to elicit and understand participants' preferences (gestures, or proposals). Previous approaches to assessing agreement grouped the gestures into equivalence classes and ignored the integral properties shared between them. In this work, we represent gestures using binary description vectors, allowing them to be partially similar. In this context, we introduce a new metric, referred to as the soft agreement rate (SAR), to quantify the level of consensus among participants. In addition, we performed computational experiments to study the behavior of our partial-agreement formula and mathematically show that existing agreement metrics are a special case of our approach. Our methodology was evaluated through a gesture elicitation study conducted with a group of neurosurgeons; nevertheless, our formulation can be applied to any other user-elicitation study. Results show that the level of agreement obtained by the SAR metric is 2.64 times higher than that of existing metrics. In addition to the most-agreed gesture, the SAR formulation also provides the most-agreed descriptors, which can help designers arrive at a final gesture set.
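The abstract does not reproduce the SAR formula, but its core idea, scoring partial similarity between binary descriptor vectors instead of requiring exact matches into equivalence classes, can be sketched. The snippet below is an illustrative assumption, not the authors' definition: it averages pairwise Jaccard similarity between participants' descriptor vectors, which reduces to a classical equivalence-class agreement rate when all vectors are either identical or disjoint.

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity between two binary descriptor vectors."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union if union else 1.0

def soft_agreement_rate(proposals):
    """Mean pairwise similarity over all participant pairs for one referent.

    With identical vectors every pair scores 1, recovering the hard
    agreement of equivalence-class metrics; partially overlapping
    vectors contribute fractional agreement instead of zero.
    """
    pairs = list(combinations(proposals, 2))
    if not pairs:
        return 1.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Three participants describe one referent with four binary descriptors
# (e.g. one-handed, dynamic, pinch, rotation) -- descriptor names are
# hypothetical.
proposals = [[1, 0, 1, 0], [1, 0, 1, 1], [1, 1, 0, 0]]
print(soft_agreement_rate(proposals))
```

Under a hard equivalence-class metric, these three distinct proposals would yield zero pairwise agreement; the soft variant credits the descriptors they share.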
Hand Gesture Recognition (HGR) based on inertial data has grown considerably in recent years, with the state-of-the-art approaches utilizing a single handheld sensor and a vocabulary comprised of simple gestures. In this work we explore the benefit
Text-to-speech and co-speech gesture synthesis have until now been treated as separate areas by two different research communities, and applications merely stack the two technologies using a simple system-level pipeline. This can lead to modeling ine
Head gestures are a natural means of face-to-face communication between people, but the recognition of head gestures in the context of virtual reality and the use of head gestures as an interface for interacting with virtual avatars and virtual environments
Users' intentions may be expressed through spontaneous gestures, which may have been seen only a few times or never before. Recognizing such gestures involves one-shot gesture learning. While most research has focused on the recognition of the gestures i
Human-computer interaction (HCI) is crucial for the safety of lives as autonomous vehicles (AVs) become commonplace. Yet, little effort has been put toward ensuring that AVs understand humans on the road. In this paper, we present GLADAS, a simulator