Surface electromyography (sEMG) is a non-invasive method of measuring the neuromuscular potentials generated when the brain instructs the body to perform both fine and coarse movements. The technique has been investigated extensively over the last two decades, with significant advances in both the hardware and the signal processing methods used to collect and analyze sEMG signals. While early work focused mainly on medical applications, there has been growing interest in using sEMG as a sensing modality to enable next-generation, high-bandwidth, and natural human-machine interfaces. In the first part of this review, we briefly describe the human musculoskeletal physiology that gives rise to sEMG signals and then survey developments in sEMG acquisition hardware. Special attention is paid to the fidelity of these devices as well as their form factor, as recent advances have pushed the limits of user comfort and high-bandwidth acquisition. In the second half of the article, we examine work quantifying the information content of natural human gestures and then review the various signal processing and machine learning methods developed to extract information from sEMG signals. Finally, we discuss the outlook for this field, highlighting the key gaps in current methods that must be closed to enable seamless, natural interaction between humans and machines.
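To make the kind of feature extraction surveyed here concrete, the sketch below computes four classic time-domain sEMG features (mean absolute value, root mean square, waveform length, and zero crossings) over a single analysis window; the function name, noise threshold, and synthetic signal are illustrative assumptions rather than details drawn from the review.

```python
import numpy as np

def semg_time_domain_features(window, zc_threshold=0.01):
    """Compute four classic time-domain sEMG features for one analysis window.

    `window` is a 1-D array of raw sEMG samples; `zc_threshold` suppresses
    noise-driven zero crossings. Feature names follow common sEMG literature.
    """
    mav = np.mean(np.abs(window))           # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))     # root mean square
    wl = np.sum(np.abs(np.diff(window)))    # waveform length
    # Zero crossings: a sign change whose amplitude step exceeds the threshold.
    zc = np.sum((window[:-1] * window[1:] < 0) &
                (np.abs(np.diff(window)) > zc_threshold))
    return np.array([mav, rms, wl, zc])

# Example: featurize a synthetic 200 ms window sampled at 1 kHz.
rng = np.random.default_rng(0)
window = rng.normal(scale=0.1, size=200)
print(semg_time_domain_features(window))
```

In a typical gesture-recognition pipeline, such features would be computed over sliding windows and fed to a classifier, though the specific window length and feature set vary across the surveyed methods.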
Personality has been identified as a vital factor in understanding the quality of human-robot interactions. Despite this, research in the area remains fragmented and lacks a coherent framework, making it difficult to understand what we know and to identify what we do not. As a result, our knowledge of personality in human-robot interactions has not kept pace with the deployment of robots in organizations or in our broader society. To address this shortcoming, this paper reviews 83 articles covering 84 separate studies to assess the current state of human-robot personality research. The review: (1) highlights major thematic research areas, (2) identifies gaps in the literature, (3) derives and presents major conclusions from the literature, and (4) offers guidance for future research.
Visual querying is essential for interactively exploring massive trajectory data. However, data uncertainty poses profound challenges to fulfilling advanced analytics requirements. On the one hand, much of the underlying data does not contain accurate geographic coordinates; for example, the recorded position of a mobile phone refers only to the region (i.e., the mobile cell station) in which it resides rather than to precise GPS coordinates. On the other hand, domain experts and general users prefer a natural way, such as a natural language sentence, to access and analyze massive movement data. In this paper, we propose a visual analytics approach that extracts spatio-temporal constraints from a textual sentence and supports an effective query method over uncertain mobile trajectory data. The approach is built on encoding massive, spatially uncertain trajectories with the semantic information of the POIs and regions they cover, and then storing the resulting trajectory documents in a text database with an effective indexing scheme. The visual interface facilitates query condition specification, situation-aware visualization, and semantic exploration of large trajectory data. Usage scenarios on real-world human mobility datasets demonstrate the effectiveness of our approach.
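The general encoding-and-indexing idea can be sketched as follows; the semantic tokens, trajectory ids, and conjunctive query function below are hypothetical simplifications of the scheme, omitting the temporal constraints and uncertainty handling that the full approach supports.

```python
from collections import defaultdict

# Encode each uncertain trajectory as a "document" of semantic tokens
# (POI categories and cell-region ids); tokens and ids are invented here.
trajectories = {
    "t1": ["cell_017", "cafe", "cell_021", "park"],
    "t2": ["cell_017", "office", "cell_034", "mall"],
    "t3": ["park", "cell_021", "cafe"],
}

# Build an inverted index: token -> set of trajectory ids containing it.
index = defaultdict(set)
for tid, tokens in trajectories.items():
    for tok in tokens:
        index[tok].add(tid)

def query(*tokens):
    """Return ids of trajectories whose documents contain all query tokens."""
    sets = [index[t] for t in tokens]
    return set.intersection(*sets) if sets else set()

# A natural-language query such as "passed a cafe near cell 021" might be
# parsed into constraint tokens like these by the (unshown) language front end.
print(query("cafe", "cell_021"))   # -> {'t1', 't3'}
```

Treating trajectories as documents in this way lets a standard text-indexing backend answer conjunctive semantic queries efficiently, which is the core of the storage-and-retrieval design described above.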
Human action recognition is used in many applications, such as video surveillance, human-computer interaction, assistive living, and gaming. Many papers in the literature have shown that fusing vision and inertial sensing improves recognition accuracy compared to using either sensing modality individually. This paper surveys the papers in which vision and inertial sensing are used simultaneously within a fusion framework to perform human action recognition. The surveyed papers are categorized in terms of fusion approaches, features, classifiers, and the multimodal datasets considered. Challenges and possible future directions for deploying the fusion of these two sensing modalities under realistic conditions are also discussed.
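As a minimal illustration of one common fusion approach covered in such surveys, the sketch below performs feature-level fusion by concatenating per-sample vision and inertial feature vectors before a single classifier; the feature dimensions, synthetic data, and nearest-centroid classifier are assumptions for illustration, not drawn from any specific surveyed paper.

```python
import numpy as np

# Synthetic per-sample features standing in for real extracted descriptors.
rng = np.random.default_rng(1)
n_train, n_classes = 60, 3
vision_feats = rng.normal(size=(n_train, 128))    # e.g., pooled video features
inertial_feats = rng.normal(size=(n_train, 24))   # e.g., IMU window statistics
labels = rng.integers(0, n_classes, size=n_train)

# Feature-level fusion: concatenate the two modalities into one vector.
fused = np.hstack([vision_feats, inertial_feats])

# Train a simple nearest-centroid classifier on the fused features.
centroids = np.stack([fused[labels == c].mean(axis=0)
                      for c in range(n_classes)])

def predict(vision_x, inertial_x):
    """Classify one sample from its fused vision + inertial features."""
    x = np.hstack([vision_x, inertial_x])
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

print(predict(rng.normal(size=128), rng.normal(size=24)))
```

Decision-level fusion, the other broad family of approaches, would instead train one classifier per modality and combine their outputs (e.g., by voting or probability averaging).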
We review the current technology underlying surface haptics, which converts passive touch surfaces into active ones (machine haptics); our perception of tactile stimuli displayed through active touch surfaces (human haptics); their potential applications (human-machine interaction); and, finally, the challenges ahead in making them available in commercial systems. This review primarily covers the tactile interactions of human fingers or hands with surface-haptics displays, focusing on the three most popular actuation methods: vibrotactile, electrostatic, and ultrasonic.
This work describes a new human-in-the-loop (HitL) assistive grasping system for individuals with varying levels of physical capability. We investigated the feasibility of four candidate input devices with our assistive grasping interface, using able-bodied individuals to define a set of quantitative metrics for assessing an assistive grasping system. From these measurements, we then created a generalized benchmark for evaluating the effectiveness of an arbitrary input device in a HitL grasping system. The four input devices were a mouse, a speech recognition device, an assistive switch, and a novel sEMG device developed by our group that attaches either to the subject's forearm or behind the ear. These preliminary results provide insight into how different interface devices perform in generalized assistive grasping tasks and also highlight the potential of sEMG-based control for severely disabled individuals.