Physical Human-Robot Interaction (pHRI) is today a well-established topic in ground manipulation, and Aerial Physical Interaction (APhI) is also developing rapidly. Nevertheless, pHRI with aerial vehicles has not been addressed so far. In this work, we study one of the first systems in which a human is physically connected to an aerial vehicle by a cable. The goal is for the robot to pull the human toward a desired position (or along a path) using only forces as an indirect communication channel. We propose an admittance-based approach that makes the pHRI safe. A controller, inspired by the literature on flexible manipulators, computes the desired interaction forces that properly guide the human. The stability of the system is formally proved with a Lyapunov-based argument. The system is also shown to be passive, and thus robust to non-idealities such as additional human forces, time-varying inputs, and other external disturbances. We further design a maneuver-regulation policy that simplifies the path-following problem. The overall method has been experimentally validated on a group of four subjects, showing reliable and safe pHRI.
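The admittance-based idea can be illustrated with a minimal sketch: the robot renders a virtual mass-spring-damper between its state and a reference, so that interaction forces are absorbed compliantly rather than rejected. The 1-D model below is a generic illustration, not the paper's actual controller; the gains, the semi-implicit Euler integration, and all numeric values are assumptions made for the sketch.

```python
def admittance_step(x, v, f_ext, x_des, dt, M=1.0, D=4.0, K=4.0):
    """One semi-implicit Euler step of a 1-D admittance law:
       M*a + D*v + K*(x - x_des) = f_ext
    M, D, K are illustrative virtual mass/damping/stiffness values
    (here chosen critically damped: D**2 == 4*M*K)."""
    a = (f_ext - D * v - K * (x - x_des)) / M
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new

# With zero external force, the compliant state settles at x_des.
x, v = 0.0, 0.0
for _ in range(5000):          # 50 s at dt = 0.01
    x, v = admittance_step(x, v, f_ext=0.0, x_des=1.0, dt=0.01)
print(round(x, 3))             # → 1.0
```

A human-side force `f_ext` transiently deflects the state away from `x_des`, and the virtual damping dissipates the injected energy, which is the intuition behind the passivity argument in the abstract.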
We present an approach to model and estimate human walking kinematics in real time for physical Human-Robot Interaction. The human gait velocity along the forward and vertical directions of motion is modelled according to the Yoyo-model. We design an Extended Kalman Filter (EKF) to estimate the frequency, bias, and trigonometric state of a biased sinusoidal signal, from which the kinematic parameters of the Yoyo-model can be extracted. The quality and robustness of the estimation are improved by heuristics-based filtering. The approach is successfully evaluated on a real dataset of walking humans, including complex trajectories and step frequencies that change over time.
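The class of estimator described above can be sketched with a standard EKF for a biased sinusoid: track quadrature states (s, c), the angular frequency w, and the bias b from the scalar measurement y = s + b. This is a generic sketch of that estimator family, not the paper's implementation; the noise levels, initial guesses, and covariances are all illustrative assumptions.

```python
import numpy as np

def ekf_biased_sinusoid(ys, dt, q=1e-4, r=1e-2):
    """EKF for y = s + b, with rotating quadrature states:
       s' =  cos(w*dt)*s + sin(w*dt)*c
       c' = -sin(w*dt)*s + cos(w*dt)*c,   w, b ~ constant.
    q (process) and r (measurement) noise levels are illustrative."""
    x = np.array([0.0, 1.0, 2 * np.pi, 0.0])   # guess: 1 Hz, zero bias
    P = np.diag([1.0, 1.0, 10.0, 1.0])          # extra uncertainty on w
    H = np.array([[1.0, 0.0, 0.0, 1.0]])        # measurement y = s + b
    for y in ys:
        s, c, w, b = x
        a = w * dt
        ca, sa = np.cos(a), np.sin(a)
        s_new, c_new = ca * s + sa * c, -sa * s + ca * c
        # Predict: exact rotation of (s, c); Jacobian F of the dynamics.
        x = np.array([s_new, c_new, w, b])
        F = np.array([[ ca,  sa,  dt * c_new, 0.0],
                      [-sa,  ca, -dt * s_new, 0.0],
                      [0.0, 0.0,        1.0, 0.0],
                      [0.0, 0.0,        0.0, 1.0]])
        P = F @ P @ F.T + q * np.eye(4)
        # Update with the scalar measurement y.
        S = (H @ P @ H.T).item() + r
        K = (P @ H.T) / S
        x = x + (K * (y - (H @ x).item())).ravel()
        P = (np.eye(4) - K @ H) @ P
    return x  # final state [s, c, w, b]

# Track a clean 1.2 Hz sinusoid with bias 0.5 for 20 s.
dt = 0.01
t = np.arange(0.0, 20.0, dt)
ys = 0.5 + np.sin(2 * np.pi * 1.2 * t)
s, c, w, b = ekf_biased_sinusoid(ys, dt)
print(w / (2 * np.pi), b)  # estimated frequency (Hz) and bias
```

From the converged frequency and bias, the kinematic parameters of a gait model such as the Yoyo-model could then be extracted, as the abstract describes.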
Industrial standards define safety requirements for Human-Robot Collaboration (HRC) in industrial manufacturing. In particular, the standards require real-time monitoring and securing of the minimum protective distance between a robot and an operator. In this work, we propose a depth-sensor-based model for workspace monitoring and an interactive Augmented Reality (AR) User Interface (UI) for safe HRC. The AR UI is implemented on two different hardware setups: a projector-mirror setup and a wearable AR headset (HoloLens). We evaluate the workspace model and the UIs in a realistic diesel motor assembly task. Compared to a baseline without interaction and workspace sharing, the AR-based interactive UIs reduce task completion time by 21-24% and robot idle time by 57-64%. However, subjective evaluations reveal that HoloLens-based AR is not yet suitable for industrial manufacturing, while the projector-mirror setup shows clear improvements in safety and work ergonomics.
When a robot performs a task next to a human, physical interaction is inevitable: the human might push, pull, twist, or guide the robot. The state of the art treats these interactions as disturbances that the robot should reject or avoid. At best, these robots respond safely while the human interacts; but after the human lets go, they simply return to their original behavior. We recognize that physical human-robot interaction (pHRI) is often intentional -- the human intervenes on purpose because the robot is not doing the task correctly. In this paper, we argue that when pHRI is intentional it is also informative: the robot can leverage interactions to learn how it should complete the rest of its current task even after the person lets go. We formalize pHRI as a dynamical system in which the human has in mind an objective function they want the robot to optimize, but the robot does not get direct access to the parameters of this objective -- they are internal to the human. Within our proposed framework, human interactions become observations about the true objective. We introduce approximations to learn from and respond to pHRI in real time. We also recognize that not all human corrections are perfect: users often interact with the robot noisily, so we improve the efficiency of robot learning from pHRI by reducing unintended learning. Finally, we conduct simulations and user studies on a robotic manipulator to compare our proposed approach to the state of the art. Our results indicate that learning from pHRI leads to better task performance and improved human satisfaction.
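The core idea, treating a physical correction as an observation about the human's internal objective, can be sketched as a simple online weight update: when the human deforms the robot's trajectory, shift the objective weights toward the features that the corrected trajectory exhibits more of. The sketch below uses a reward-style sign convention, made-up trajectory features, and an arbitrary step size; it illustrates the general family of gradient-style updates, not the paper's exact algorithm.

```python
import numpy as np

def features(traj):
    """Two illustrative features of a 2-D (x, height) trajectory:
    total path length and mean height (both made up for this sketch)."""
    path_len = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
    mean_height = np.mean(traj[:, 1])
    return np.array([path_len, mean_height])

def update_from_correction(theta, traj_robot, traj_corrected, alpha=0.5):
    """Gradient-style update: move the objective weights in the
    direction of the feature change the human's correction induced."""
    return theta + alpha * (features(traj_corrected) - features(traj_robot))

# The robot planned a straight, high path; the human pushed it lower.
traj_robot = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
traj_human = np.array([[0.0, 1.0], [1.0, 0.4], [2.0, 1.0]])
theta = update_from_correction(np.zeros(2), traj_robot, traj_human)
print(theta)  # the mean-height weight decreases: lower paths preferred
```

A single correction changes the weights only slightly (scaled by `alpha`), which reflects the abstract's point about noisy, imperfect corrections: damping each update reduces unintended learning from any one interaction.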
We design and develop a new shared Augmented Reality (AR) workspace for Human-Robot Interaction (HRI), which establishes bi-directional communication between human agents and robots. In a prototype system, the shared AR workspace enables shared perception, so that a physical robot not only perceives the virtual elements in its own view but also infers the utility of the human agent--the cost needed to perceive and interact in AR--by sensing the human agent's gaze and pose. Such a new HRI design also affords shared manipulation, wherein the physical robot can control and alter virtual objects in AR as an active agent; crucially, the robot can proactively interact with human agents instead of purely passively executing received commands. In experiments, we design a resource-collection game that qualitatively demonstrates how a robot perceives, processes, and manipulates in AR, and quantitatively evaluates the efficacy of HRI using the shared AR workspace. We further discuss how the system can benefit future HRI studies that would otherwise be challenging.
Robot capabilities are maturing across domains, from self-driving cars to bipeds and drones. As a result, robots will soon no longer be confined to safety-controlled industrial settings; instead, they will directly interact with the general public. The growing field of Human-Robot Interaction (HRI) studies various aspects of this scenario, from social norms to joint action to human-robot teams and more. Researchers in HRI have made great strides in developing models, methods, and algorithms for robots acting with and around humans, but these computational HRI models and algorithms generally do not come with formal guarantees and constraints on their operation. To enable human-interactive robots to move from the lab to real-world deployments, we must address this gap. This article provides an overview of verification, validation, and synthesis techniques used to create demonstrably trustworthy systems, describes several HRI domains that could benefit from such techniques, and offers a roadmap of the challenges and the research needed to create formalized and guaranteed human-robot interaction.