Biological systems, including human beings, have the innate ability to perform complex tasks in a versatile and agile manner. Researchers in sensorimotor control have tried to understand and formally define this innate property. The idea, supported by several experimental findings, that biological systems are able to combine and adapt basic units of motion into complex tasks finally led to the formulation of the motor primitives theory. In this respect, Dynamic Movement Primitives (DMPs) represent an elegant mathematical formulation of motor primitives as stable dynamical systems, and are well suited to generate motor commands for artificial systems like robots. In the last decades, DMPs have inspired researchers in different robotic fields, including imitation and reinforcement learning, optimal control, physical interaction, and human-robot co-working, resulting in a considerable number of published papers. The goal of this tutorial survey is two-fold. On one side, we present the existing DMP formulations in rigorous mathematical terms, and discuss the advantages and limitations of each approach as well as practical implementation details. In the tutorial vein, we also search for existing implementations of the presented approaches and release several others. On the other side, we provide a systematic and comprehensive review of the existing literature and categorize state-of-the-art work on DMPs. The paper concludes with a discussion on the limitations of DMPs and an outline of possible research directions.
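To make the "stable dynamical system" idea concrete, the following Python sketch implements a standard one-dimensional discrete DMP (a second-order transformation system driven by a phase variable and a learned forcing term). The gain values, basis-function placement, and the class name DMP1D are illustrative assumptions of this sketch, not choices prescribed by the survey.

    # Minimal sketch of a one-dimensional discrete DMP; gains and basis placement
    # are illustrative assumptions, not taken from any specific surveyed paper.
    import numpy as np

    class DMP1D:
        def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=8.0):
            self.alpha_z, self.beta_z, self.alpha_x = alpha_z, beta_z, alpha_x
            self.n_basis = n_basis
            self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))        # centres in phase x
            self.h = 1.0 / np.diff(self.c, append=self.c[-1] * 0.5) ** 2  # basis widths
            self.w = np.zeros(n_basis)

        def _forcing(self, x, y0, g):
            psi = np.exp(-self.h * (x - self.c) ** 2)
            return (psi @ self.w) / (psi.sum() + 1e-10) * x * (g - y0)

        def fit(self, y_demo, tau):
            # Locally weighted regression on the demonstrated trajectory.
            t = np.linspace(0, tau, len(y_demo))
            yd = np.gradient(y_demo, t)
            ydd = np.gradient(yd, t)
            y0, g = y_demo[0], y_demo[-1]
            x = np.exp(-self.alpha_x * t / tau)
            f_target = tau**2 * ydd - self.alpha_z * (self.beta_z * (g - y_demo) - tau * yd)
            s = x * (g - y0)
            for i in range(self.n_basis):
                psi = np.exp(-self.h[i] * (x - self.c[i]) ** 2)
                self.w[i] = (s * psi @ f_target) / (s * psi @ s + 1e-10)
            return y0, g

        def rollout(self, y0, g, tau, dt=0.001):
            y, z, x, ys = y0, 0.0, 1.0, []
            for _ in range(int(tau / dt)):
                f = self._forcing(x, y0, g)
                z += dt / tau * (self.alpha_z * (self.beta_z * (g - y) - z) + f)
                y += dt / tau * z
                x += dt / tau * (-self.alpha_x * x)
                ys.append(y)
            return np.array(ys)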
In many robot control problems, factors such as stiffness and damping matrices and manipulability ellipsoids are naturally represented as symmetric positive definite (SPD) matrices, which capture the specific geometric characteristics of those factors. Typical learned skill models, such as dynamic movement primitives (DMPs), cannot, however, be directly employed with quantities expressed as SPD matrices, as they are limited to data in Euclidean space. In this paper, we propose a novel and mathematically principled framework that uses Riemannian metrics to reformulate DMPs such that the resulting formulation can operate with SPD data on the SPD manifold. Our evaluation demonstrates that beneficial properties of DMPs, such as changing the goal during operation, also apply to the proposed formulation.
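As a rough illustration of the machinery such a Riemannian reformulation relies on, the sketch below implements the affine-invariant exponential and logarithmic maps on the SPD manifold and uses them to take a geodesic step from an initial stiffness matrix toward a goal stiffness. The helper names spd_log and spd_exp and the example matrices are assumptions of this snippet, not the paper's API.

    # Affine-invariant exp/log maps on the SPD manifold (illustrative sketch).
    import numpy as np
    from scipy.linalg import sqrtm, logm, expm, inv

    def spd_log(base, point):
        """Map an SPD matrix `point` to the tangent space at `base`."""
        b_half = np.real(sqrtm(base))
        b_inv_half = inv(b_half)
        return b_half @ np.real(logm(b_inv_half @ point @ b_inv_half)) @ b_half

    def spd_exp(base, tangent):
        """Map a tangent-space element back onto the SPD manifold."""
        b_half = np.real(sqrtm(base))
        b_inv_half = inv(b_half)
        return b_half @ expm(b_inv_half @ tangent @ b_inv_half) @ b_half

    # Example: move a stiffness matrix a fraction of the way toward a goal
    # stiffness along the geodesic, the manifold analogue of the Euclidean
    # difference a standard DMP would use.
    K0 = np.diag([100.0, 200.0, 150.0])
    Kg = np.diag([300.0, 120.0, 180.0])
    K_step = spd_exp(K0, 0.1 * spd_log(K0, Kg))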
Movement primitives are an important policy class for real-world robotics. However, the high dimensionality of their parametrization makes policy optimization expensive both in terms of samples and computation. An efficient representation of movement primitives facilitates the application of machine learning techniques such as reinforcement learning to robotics. Motions, especially in highly redundant kinematic structures, exhibit high correlation in the configuration space. For this reason, prior work has mainly focused on applying dimensionality reduction techniques in the configuration space. In this paper, we investigate dimensionality reduction in the parameter space, identifying principal movements. The resulting approach is enriched with a probabilistic treatment of the parameters, inheriting all the properties of Probabilistic Movement Primitives. We test the proposed technique both on a real robotic task and on a database of complex human movements. The empirical analysis shows that dimensionality reduction in parameter space is more effective than in configuration space, as it enables movements to be represented with significantly fewer parameters.
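A minimal sketch of the core idea, under the assumption that each demonstration is summarized by a weight vector of the primitive: stack the weight vectors, apply PCA (via SVD), and keep the leading components as "principal movements". All names, sizes, and the random data below are illustrative, not the paper's setup.

    # Dimensionality reduction in parameter space (illustrative sketch).
    import numpy as np

    def reduce_parameter_space(W, n_components):
        """W: (n_demos, n_params) matrix of primitive weight vectors."""
        mean = W.mean(axis=0)
        U, S, Vt = np.linalg.svd(W - mean, full_matrices=False)
        basis = Vt[:n_components]           # principal movements
        latent = (W - mean) @ basis.T       # low-dimensional coordinates
        return mean, basis, latent

    # A new movement is parametrised by a few latent coordinates z and
    # reconstructed as  w = mean + z @ basis.
    W = np.random.randn(30, 7 * 25)          # e.g. 7 joints x 25 basis functions
    mean, basis, latent = reduce_parameter_space(W, n_components=5)
    w_reconstructed = mean + latent[0] @ basis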
The realization of motion description is a challenging task for fixed-wing Unmanned Aerial Vehicle (UAV) acrobatic flight, due to the inherent coupling between translational and rotational motion. This paper aims to develop a novel maneuver description method based on the idea of imitation learning, and our work makes two main contributions: 1) a dual-quaternion-based dynamic movement primitive (DQ-DMP) is proposed, in which the state equations of position and attitude can be combined without loss of accuracy; 2) an online hardware-in-the-loop (HITL) training system is established. Based on the DQ-DMP method, the geometric features of the demonstrated maneuver can be obtained in real time, and the stability of the DQ-DMP is theoretically proved. The simulation results illustrate the superiority of the proposed method compared to the traditional position/attitude decoupling method.
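For illustration, the snippet below packs a pose (unit rotation quaternion plus translation vector) into a unit dual quaternion, the representation that allows position and attitude to evolve in a single state equation. The (w, x, y, z) ordering and the function names are assumptions of this sketch, not the paper's implementation.

    # Pose to unit dual quaternion (illustrative sketch).
    import numpy as np

    def qmul(p, q):
        """Hamilton product of two quaternions given as (w, x, y, z)."""
        pw, px, py, pz = p
        qw, qx, qy, qz = q
        return np.array([
            pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw,
        ])

    def pose_to_dual_quaternion(q_rot, t):
        """Unit dual quaternion (q_r, q_d) with q_d = 0.5 * (0, t) * q_r."""
        q_r = q_rot / np.linalg.norm(q_rot)
        q_d = 0.5 * qmul(np.concatenate(([0.0], t)), q_r)
        return q_r, q_d

    # Example: identity rotation, translation of 1 m along x.
    q_r, q_d = pose_to_dual_quaternion(np.array([1.0, 0.0, 0.0, 0.0]),
                                       np.array([1.0, 0.0, 0.0]))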
Robot applications in our daily life are increasing at an unprecedented pace. As robots will soon operate out in the wild, we must identify the safety and security vulnerabilities they will face. Robotics researchers and manufacturers focus their attention on new, cheaper, and more reliable applications, but often disregard operability in adversarial environments, where a trusted or untrusted user can jeopardize or even alter the robot's task. In this paper, we identify a new paradigm of security threats in the next generation of robots. These threats fall beyond the known hardware- or network-based ones, and new solutions are needed to address them. These new threats include malicious use of the robot's privileged access, tampering with the robot's sensor system, and tricking the robot's deliberation into harmful behaviors. We provide a taxonomy of attacks that exploit these vulnerabilities, with realistic examples, and we outline effective countermeasures to better prevent, detect, and mitigate them.
We formalize decision-making problems in robotics and automated control using continuous MDPs and actions that take place over continuous time intervals. We then approximate the continuous MDP using finer and finer discretizations. Doing so results in a family of systems, each of which has an extremely large action space, although only a few actions are interesting. We can view the decision maker as being unaware of which actions are interesting, and model this using MDPUs (MDPs with unawareness), where the action space is much smaller. As we show, MDPUs can serve as a general framework for learning tasks in robotic problems. We prove results on the difficulty of learning a near-optimal policy in an MDPU for a continuous task. We apply these ideas to the problem of having a humanoid robot learn on its own how to walk.
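As a toy illustration of the discretization idea only (not the humanoid-walking task or the MDPU machinery itself), the sketch below grids a one-dimensional continuous-state problem and solves the resulting finite MDP with value iteration. The dynamics, reward, and grid resolution are placeholder assumptions.

    # Discretize a 1-D continuous-state problem and solve it with value iteration.
    import numpy as np

    n_states, n_actions, gamma = 51, 3, 0.95
    states = np.linspace(-1.0, 1.0, n_states)
    actions = np.linspace(-0.1, 0.1, n_actions)

    def next_state_index(s_idx, a):
        s_next = np.clip(states[s_idx] + a, -1.0, 1.0)
        return int(np.argmin(np.abs(states - s_next)))

    reward = -np.abs(states)                  # prefer staying near the origin
    V = np.zeros(n_states)
    for _ in range(500):                      # value iteration on the grid
        Q = np.empty((n_states, n_actions))
        for i in range(n_states):
            for j, a in enumerate(actions):
                Q[i, j] = reward[i] + gamma * V[next_state_index(i, a)]
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-6:
            break
        V = V_new
    policy = Q.argmax(axis=1)                 # greedy policy on the discretized MDP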