Users' intentions may be expressed through spontaneous gestures that have been seen only a few times, or never, before. Recognizing such gestures involves one-shot gesture learning. While most research has focused on recognizing the gestures themselves, new approaches have recently been proposed that treat gesture perception and production as parts of the same problem. The framework presented in this work focuses on learning the process that leads to gesture generation, rather than mining the features associated with the gestures. This is achieved using kinematic, cognitive, and biomechanical characteristics of human interaction. These factors enable the artificial production of realistic gesture samples from a single observation. The generated samples are then used as training sets for several state-of-the-art classifiers. Performance is measured first by the machines' gesture recognition rates, and then by human recognition of gestures performed by robots. Based on these two scenarios, a new composite metric of coherency is proposed, quantifying the agreement between these two conditions. Experimental results show an average recognition performance of 89.2% for the trained classifiers and 92.5% for the human participants. Coherency in recognition was determined at 93.6%. While this new metric is not directly comparable to raw accuracy or other purely performance-based standard metrics, it quantifies how realistic the machine-generated samples are and how accurate the resulting mimicry is.
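To make the generate-then-train idea concrete, the following is a minimal Python sketch: it expands a single observed gesture trajectory into a synthetic training set via a simple kinematic perturbation model (time warping plus amplitude noise) and computes a plain agreement score between machine and human labels. The perturbation model, the function names, and the agreement formula are illustrative assumptions, not the paper's actual kinematic/cognitive/biomechanical generation process or its composite coherency metric.

```python
# Illustrative sketch only: a simple kinematic perturbation model and an
# agreement-style "coherency" score. Both are assumptions for exposition,
# not the method or metric defined in the paper.
import numpy as np

def synthesize_samples(gesture, n_samples=50, time_jitter=0.05,
                       amp_jitter=0.03, seed=0):
    """Generate synthetic variants of a single (T, D) gesture trajectory
    by resampling along a jittered time axis and adding amplitude noise."""
    rng = np.random.default_rng(seed)
    T, D = gesture.shape
    t = np.linspace(0.0, 1.0, T)
    samples = []
    for _ in range(n_samples):
        # Monotonic time warp: jitter the time knots, then re-sort them.
        warped = np.sort(np.clip(t + rng.normal(0, time_jitter, T), 0, 1))
        variant = np.stack(
            [np.interp(warped, t, gesture[:, d]) for d in range(D)], axis=1)
        variant *= 1.0 + rng.normal(0, amp_jitter)    # global amplitude scale
        variant += rng.normal(0, amp_jitter, variant.shape)  # per-point noise
        samples.append(variant)
    return np.stack(samples)          # shape: (n_samples, T, D)

def coherency(machine_labels, human_labels):
    """Fraction of gestures on which machine and human recognition agree --
    an assumed stand-in for the paper's composite coherency metric."""
    machine_labels = np.asarray(machine_labels)
    human_labels = np.asarray(human_labels)
    return float(np.mean(machine_labels == human_labels))

# Example: one observed circular gesture expanded to 100 training samples.
theta = np.linspace(0, 2 * np.pi, 64)
observed = np.stack([np.cos(theta), np.sin(theta)], axis=1)
training_set = synthesize_samples(observed, n_samples=100)
print(training_set.shape)                       # (100, 64, 2)
print(coherency([0, 1, 1, 0], [0, 1, 0, 0]))    # 0.75
```

Under these assumptions, the synthetic set would be fed to any off-the-shelf classifier, and coherency would be read as how closely machine recognition tracks human judgment of the same generated gestures.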
Hand Gesture Recognition (HGR) based on inertial data has grown considerably in recent years, with state-of-the-art approaches using a single handheld sensor and a vocabulary of simple gestures. In this work we explore the benefit …
We introduce AirWare, an in-air hand-gesture recognition system that uses the speaker and microphone already embedded in most electronic devices, together with embedded infrared proximity sensors. Gestures identified by AirWare are performed in the air …
Head gestures are a natural means of face-to-face communication between people, but the recognition of head gestures in the context of virtual reality, and the use of head gestures as an interface for interacting with virtual avatars and virtual environments, …
Text-to-speech and co-speech gesture synthesis have until now been treated as separate areas by two different research communities, and applications merely stack the two technologies using a simple system-level pipeline. This can lead to modeling inefficiencies …
Participatory design is a popular design technique that involves the end users in the early stages of the design process to obtain user-friendly gestural interfaces. Guessability studies followed by agreement analyses are often used to elicit and …