Quadruped robots show great potential for traversing rough terrain while carrying payloads. Many traditional control methods for legged dynamic locomotion are model-based and highly sensitive to model uncertainties and payload variations, so high-performance model parameter estimation becomes indispensable. However, the inertia parameters of the payload are usually unknown and change dynamically when the quadruped robot is deployed in versatile tasks. To address this problem, online identification of the payload's inertia parameters and Center of Mass (CoM) position has drawn increasing interest. This study presents an adaptive controller based on online payload identification for quadruped locomotion with high payload capacity (the ratio between payload and the robot's self-weight). We name it the Adaptive Controller for Quadruped Locomotion (ACQL); it consists of a recursive update law and a control law. ACQL estimates the external forces and torques induced by the payload online. The estimate is incorporated into an inverse-dynamics-based Quadratic Programming (QP) formulation to realize a trotting gait, improving the tracking accuracy of the robot's CoM and orientation trajectories. The proposed method, ACQL, is verified on a real quadruped robot platform. Experiments demonstrate the estimation efficacy for payloads weighing from 20 kg to 75 kg and loaded at different locations on the robot's torso.
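As a rough illustration of the two ingredients named in the abstract, the sketch below (not the authors' implementation; all symbols, gains, and bounds are assumptions) combines a forgetting-factor recursive estimate of the payload wrench with its use as a feedforward correction in a least-squares stand-in for the inverse-dynamics QP.

```python
import numpy as np

# Illustrative sketch only: recursive payload-wrench estimation plus a crude
# force-allocation step that compensates the estimate online.

class PayloadWrenchEstimator:
    def __init__(self, forgetting=0.95):
        self.lam = forgetting             # forgetting factor of the update law
        self.wrench = np.zeros(6)         # [fx, fy, fz, tx, ty, tz]

    def update(self, wrench_residual):
        """wrench_residual: measured base wrench minus the wrench predicted
        by the nominal (unloaded) rigid-body model at the current state."""
        self.wrench = self.lam * self.wrench + (1.0 - self.lam) * wrench_residual
        return self.wrench


def allocate_contact_forces(contact_map, desired_wrench, payload_wrench, f_max=500.0):
    """contact_map: 6 x (3*n_stance) matrix mapping stacked foot forces to a
    base wrench.  Solves the allocation in a least-squares sense and clips to
    crude bounds -- a placeholder for the QP with friction-cone constraints."""
    target = desired_wrench + payload_wrench          # compensate payload online
    forces, *_ = np.linalg.lstsq(contact_map, target, rcond=None)
    return np.clip(forces, -f_max, f_max)
```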
Legged robot locomotion requires the planning of stable reference trajectories, especially while traversing uneven terrain. The proposed trajectory optimization framework is capable of generating dynamically stable base and footstep trajectories over multiple steps. The locomotion task can be defined by contact locations, base motion, or both, making the algorithm suitable for multiple scenarios (e.g., the presence of moving obstacles). The planner uses a simplified momentum-based task-space model of the robot dynamics, allowing computation times fast enough for online replanning. This fast planning capability also enables the quadruped to accommodate drift and environmental changes. The algorithm is tested in simulation and on a real robot across multiple scenarios, including uneven terrain, stairs, and moving obstacles. The results show that the planner can generate stable trajectories on the real robot even when a 15 cm high box is placed in its path at the last moment.
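To make the "simplified model, re-solved every cycle" idea concrete, here is a minimal sketch of a least-squares CoM plan under a double-integrator stand-in for the momentum model. It is not the paper's formulation: contact forces, angular momentum, and stability constraints are omitted, and all names and weights are assumptions.

```python
import numpy as np

# Toy planner: choose piecewise-constant CoM accelerations that reach a goal
# position while staying smooth, cheap enough to re-solve at every control step.

def plan_com_trajectory(com0, vel0, goal, n_steps=20, dt=0.05, w_goal=10.0, w_acc=1.0):
    dim = com0.shape[0]
    A_rows, b_rows = [], []

    # Exact double-integrator terminal position with piecewise-constant accel:
    # com_N = com0 + vel0*N*dt + sum_k a_k * dt^2 * (N - k - 0.5)
    coeff = np.array([(n_steps - k - 0.5) * dt**2 for k in range(n_steps)])
    for d in range(dim):
        row = np.zeros(n_steps * dim)
        row[d::dim] = coeff
        A_rows.append(np.sqrt(w_goal) * row)
        b_rows.append(np.sqrt(w_goal) * (goal[d] - com0[d] - vel0[d] * n_steps * dt))

    # Acceleration regularization keeps the momentum profile smooth.
    A_rows.append(np.sqrt(w_acc) * np.eye(n_steps * dim))
    b_rows.append(np.zeros(n_steps * dim))

    A = np.vstack([np.atleast_2d(r) for r in A_rows])
    b = np.hstack([np.atleast_1d(r) for r in b_rows])
    acc, *_ = np.linalg.lstsq(A, b, rcond=None)
    return acc.reshape(n_steps, dim)
```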
Despite the great progress in quadrupedal robotics during the last decade, selecting good contacts (footholds) in highly uneven and cluttered environments still remains an open challenge. This paper builds upon a state-of-the-art approach, already su
The potential diagnostic applications of magnet-actuated capsules have greatly increased in recent years. Most of these potential applications demand accurate position control of the capsule. However, the friction between the robot and the environment, as well as the drag force from the tether, play a significant role during motion control of the capsule. Moreover, these forces, especially the friction force, are typically hard to model beforehand. In this paper, we first design a magnet-actuated tethered capsule robot in which the driving magnet is mounted on the end of a robotic arm. We then propose a learning-based approach to model the friction force between the capsule and the environment, with the goal of increasing the control accuracy of the whole system. Finally, several real-robot experiments demonstrate the effectiveness of the proposed approach.
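The abstract does not name the learning model, so the sketch below uses Gaussian-process regression as one plausible choice for mapping the capsule state to the residual friction/drag force; the feature names and data are purely illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Training data: rows of [capsule speed, normal force, tether extension];
# targets: measured force minus the force predicted by the magnet model.
X_train = np.array([[0.01, 0.20, 0.05],
                    [0.02, 0.25, 0.10],
                    [0.03, 0.22, 0.15]])
y_train = np.array([0.012, 0.018, 0.025])

model = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
model.fit(X_train, y_train)

# At run time the predicted friction is added to the commanded actuation force
# so the position controller compensates for it.
friction_pred, friction_std = model.predict(np.array([[0.015, 0.23, 0.08]]),
                                            return_std=True)
```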
We introduce reachability analysis for the formal examination of robots. We propose a novel identification method, which preserves reachset conformance of linear systems. We additionally propose a simultaneous identification and control synthesis scheme to obtain optimal controllers with formal guarantees. In a case study, we examine the effectiveness of using reachability analysis to synthesize a state-feedback controller, a velocity observer, and an output feedback controller.
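As a minimal illustration of the reachset-conformance idea (not the paper's identification method or tooling), the following sketch propagates interval over-approximations of the reachable set of a discrete-time linear system and checks that recorded measurements stay inside them.

```python
import numpy as np

# x+ = A x + B u + w with |w| <= w_bound; intervals are axis-aligned boxes.

def propagate_interval(A, B, u, lo, hi, w_bound):
    """One-step interval image of [lo, hi] under the linear dynamics."""
    center = 0.5 * (lo + hi)
    radius = 0.5 * (hi - lo)
    new_center = A @ center + B @ u
    new_radius = np.abs(A) @ radius + w_bound     # interval-arithmetic bound
    return new_center - new_radius, new_center + new_radius


def is_conformant(A, B, w_bound, x0, inputs, measurements, eps=1e-9):
    """True if every measured state lies inside the propagated reachset."""
    lo = hi = np.asarray(x0, dtype=float)
    for u, x_meas in zip(inputs, measurements):
        lo, hi = propagate_interval(A, B, u, lo, hi, w_bound)
        if np.any(x_meas < lo - eps) or np.any(x_meas > hi + eps):
            return False
    return True
```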
Animals have remarkable abilities to adapt locomotion to different terrains and tasks. However, robots trained by means of reinforcement learning are typically able to solve only a single task, and a transferred policy is usually inferior to one trained from scratch. In this work, we demonstrate that meta-reinforcement learning can be used to successfully train a robot capable of solving a wide range of locomotion tasks. The performance of the meta-trained robot is similar to that of a robot trained on a single task.
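To show the two-level structure that meta-training refers to, here is a toy first-order (Reptile-style) meta-learning loop on stand-in regression tasks; in the paper the tasks would be locomotion variants and the inner loop a reinforcement-learning update, so every detail below is an assumption made for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A stand-in 'task': a random linear map y = w* x, exposed as a data source."""
    w_star = rng.normal(size=3)
    def data(n=32):
        X = rng.normal(size=(n, 3))
        return X, X @ w_star
    return data

def inner_adapt(w, data, lr=0.05, steps=5):
    """Task-specific adaptation: a few gradient steps on this task's loss."""
    for _ in range(steps):
        X, y = data()
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Outer loop: nudge the meta-parameters toward each task's adapted solution,
# so that a few inner steps suffice on a newly sampled task.
meta_w = np.zeros(3)
meta_lr = 0.1
for _ in range(200):
    task = sample_task()
    adapted = inner_adapt(meta_w, task)
    meta_w = meta_w + meta_lr * (adapted - meta_w)
```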