We focus on the problem of developing energy-efficient controllers for quadrupedal robots. Animals actively switch gaits at different speeds to lower their energy consumption. In this paper, we devise a hierarchical learning framework in which distinctive locomotion gaits and natural gait transitions emerge automatically from a simple energy-minimization reward. We use reinforcement learning to train a high-level gait policy that specifies the gait pattern of each foot, while a low-level whole-body controller optimizes the motor commands so that the robot walks at the desired velocity using that gait pattern. We test our learning framework on a quadruped robot and demonstrate automatic gait transitions, from walking to trotting and to fly-trotting, as the robot increases its speed. We show that the learned hierarchical controller consumes much less energy across a wide range of locomotion speeds than baseline controllers.
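A minimal sketch of the two-level structure this abstract describes is given below, with an energy term computed from mechanical power. The names (GaitPolicy, whole_body_controller, energy_reward) and all dimensions are illustrative placeholders, not the paper's implementation.

```python
import numpy as np

def energy_reward(torques, joint_velocities, dt):
    # Negative mechanical power, integrated over one control step,
    # rewards low energy consumption.
    return -np.sum(np.abs(torques * joint_velocities)) * dt

class GaitPolicy:
    """High-level policy: maps the robot state to a gait pattern, i.e. a
    phase offset per foot and a stepping frequency. A trained RL policy
    would replace the random stub below."""
    def __call__(self, state):
        phase_offsets = np.random.uniform(0.0, 1.0, size=4)   # one per foot
        frequency = np.random.uniform(1.0, 3.0)               # steps / second
        return phase_offsets, frequency

def whole_body_controller(state, gait_pattern, desired_velocity):
    """Low-level controller: turns the commanded gait pattern and desired
    velocity into 12 joint torques (stubbed as zeros here)."""
    return np.zeros(12)

# One high-level decision followed by several low-level steps, mimicking
# the slow/fast timescale split of the hierarchy.
gait_policy = GaitPolicy()
state = np.zeros(30)                        # placeholder robot state
gait = gait_policy(state)
for _ in range(10):                         # low-level runs at a higher rate
    torques = whole_body_controller(state, gait, desired_velocity=0.8)
```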
The planning of whole-body motion and step time for bipedal locomotion is formulated as a model predictive control (MPC) problem, in which a sequence of optimization problems must be solved online. Because directly solving these problems is extremely time-consuming, we propose a predictive gait synthesizer that offers immediate solutions. Based on the full-dimensional model, a library of gaits with different speeds and periods is first constructed offline. The proposed gait synthesizer then generates gaits in real time at 1 kHz by blending gaits from the library according to an online prediction of the centroidal dynamics. We prove that the constructed MPC problem ensures uniform ultimate boundedness (UUB) of the CoM states and show that the proposed gait synthesizer provides feasible solutions to the MPC optimization problems. Simulation and experimental results on a bipedal robot with 8 degrees of freedom (DoF) demonstrate the performance and robustness of this approach.
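The gait-library idea lends itself to a simple interpolation sketch: gaits precomputed offline on a (speed, period) grid are blended online. The grid values, trajectory parameterization, and random placeholder coefficients below are assumptions for illustration only.

```python
import numpy as np

speeds  = np.linspace(0.0, 1.0, 6)     # m/s grid (offline library axis 1)
periods = np.linspace(0.3, 0.8, 6)     # s grid   (offline library axis 2)
# Each library entry: joint-trajectory coefficients for an 8-DoF biped,
# here just random placeholders of shape (8, n_coeffs).
library = np.random.randn(len(speeds), len(periods), 8, 5)

def synthesize_gait(target_speed, target_period):
    """Blend the four neighbouring library gaits (bilinear interpolation)."""
    i = np.clip(np.searchsorted(speeds, target_speed) - 1, 0, len(speeds) - 2)
    j = np.clip(np.searchsorted(periods, target_period) - 1, 0, len(periods) - 2)
    a = (target_speed - speeds[i]) / (speeds[i + 1] - speeds[i])
    b = (target_period - periods[j]) / (periods[j + 1] - periods[j])
    return ((1 - a) * (1 - b) * library[i, j]
            + a * (1 - b) * library[i + 1, j]
            + (1 - a) * b * library[i, j + 1]
            + a * b * library[i + 1, j + 1])

# Called at a high rate with the speed/period suggested by the predicted
# centroidal state.
gait = synthesize_gait(0.45, 0.5)
```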
Traditional motion planning approaches for multi-legged locomotion divide the problem into several stages, such as contact search and trajectory generation. However, reasoning about contacts and motions simultaneously is crucial for generating complex whole-body behaviors. Currently, coupling these problems has required either the assumption of a fixed gait sequence and flat terrain, or non-convex optimization with intractable computation time. In this paper, we propose a mixed-integer convex formulation to simultaneously plan contact locations, gait transitions, and motion in a computationally efficient fashion. In contrast to previous works, our approach is limited neither to flat terrain nor to a pre-specified gait sequence. Instead, we incorporate the friction-cone stability margin, approximate the robot's torque limits, and plan the gait using mixed-integer convex constraints. We experimentally validated our approach on the HyQ robot by traversing different challenging terrains, where non-convexity and flat-terrain assumptions might lead to sub-optimal or unstable plans. Our method increases motion generality while keeping computation time low.
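The mixed-integer structure can be illustrated with binary variables that assign a foothold to one of several flat surface patches, while a linearized friction cone supplies a stability margin. The sketch below brute-forces the binary choice for clarity; a real implementation would pass the same constraints to a mixed-integer convex solver, and all geometry and numbers here are invented.

```python
import itertools
import numpy as np

# Two candidate surface patches: (center, normal, half-extent), horizontal here.
surfaces = [
    (np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), 0.3),
    (np.array([0.6, 0.0, 0.1]), np.array([0.0, 0.0, 1.0]), 0.3),
]
mu = 0.6                                      # friction coefficient
desired_step = np.array([0.5, 0.0, 0.1])      # where we want the foot to land

def friction_margin(normal, force):
    # Margin of a linearized friction cone: positive => force inside the cone.
    fn = force @ normal
    ft = np.linalg.norm(force - fn * normal)
    return mu * fn - ft

best = None
for assignment in itertools.product([0, 1], repeat=len(surfaces)):
    if sum(assignment) != 1:                  # exactly one surface per foothold
        continue
    k = assignment.index(1)
    center, normal, half = surfaces[k]
    # Clamp the desired step to the chosen patch (patches are horizontal).
    xy = center[:2] + np.clip(desired_step[:2] - center[:2], -half, half)
    foothold = np.array([xy[0], xy[1], center[2]])
    force = np.array([0.0, 0.0, 9.81 * 30.0])  # nominal support force (30 kg)
    margin = friction_margin(normal, force)
    cost = np.linalg.norm(foothold - desired_step)
    if margin > 0 and (best is None or cost < best[0]):
        best = (cost, k, foothold)

print(best)   # (cost, chosen surface index, foothold position)
```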
We present a hierarchical framework that combines model-based control and reinforcement learning (RL) to synthesize robust controllers for a quadruped (the Unitree Laikago). The system consists of a high-level controller that learns to choose from a set of primitives in response to changes in the environment and a low-level controller that uses an established control method to robustly execute the primitives. Our framework learns a controller that can adapt on the fly to challenging environmental changes, including novel scenarios not seen during training. The learned controller is up to 85 percent more energy efficient and more robust than baseline methods. We also deploy the controller on a physical robot without any randomization or adaptation scheme.
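A minimal sketch of the primitive-selection hierarchy might look as follows; the primitive names, the Q-value stand-in, and the zero-torque stub are illustrative assumptions rather than the paper's components.

```python
import numpy as np

PRIMITIVES = ["trot_forward", "trot_in_place", "side_step", "recover"]

def high_level_policy(observation, q_values):
    """Pick the primitive with the highest learned value for this observation."""
    return PRIMITIVES[int(np.argmax(q_values(observation)))]

def low_level_controller(primitive, robot_state):
    """Execute the chosen primitive with an established model-based controller
    (stubbed: returns zero joint torques)."""
    return np.zeros(12)

# One control step of the hierarchy.
obs = np.zeros(32)                                  # placeholder observation
q = lambda o: np.random.randn(len(PRIMITIVES))      # stand-in for a trained Q-network
torques = low_level_controller(high_level_policy(obs, q), robot_state=None)
```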
Hierarchical learning has been successful at learning generalizable locomotion skills on walking robots in a sample-efficient manner. However, the low-dimensional latent action used to communicate between the two layers of the hierarchy is typically user-designed. In this work, we present a fully learned hierarchical framework that jointly learns the low-level controller and the high-level latent action space. Once this latent space is learned, we plan over continuous latent actions in a model-predictive-control fashion, using a learned high-level dynamics model. This framework generalizes to multiple robots, and we present results on a Daisy hexapod simulation, an A1 quadruped simulation, and Daisy robot hardware. We compare a range of learned hierarchical approaches from the literature and show that our framework outperforms the baselines on multiple tasks in both simulations. In addition to learned approaches, we also compare against inverse kinematics (IK) acting on the desired robot motion, and show that our fully learned framework outperforms IK in adverse settings in both the A1 and Daisy simulations. On hardware, we show the Daisy hexapod achieving multiple locomotion tasks in an unstructured outdoor setting with only 2000 hardware samples, reinforcing the robustness and sample efficiency of our approach.
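Planning over a learned latent action space with a learned dynamics model can be sketched with simple random-shooting MPC, as below; the stub dynamics, dimensions, and cost are placeholders standing in for the learned networks and are not taken from the paper.

```python
import numpy as np

LATENT_DIM, STATE_DIM, HORIZON, N_SAMPLES = 4, 8, 5, 256

def latent_dynamics(state, z):
    """Learned high-level model f(s, z) -> next s (toy stub here)."""
    return state + 0.1 * np.tanh(z).mean() * np.ones(STATE_DIM)

def task_cost(state, goal):
    return np.linalg.norm(state - goal)

def plan_latent_action(state, goal):
    """Random-shooting MPC: sample latent-action sequences, roll them out
    through the learned model, and return the first action of the best one."""
    candidates = np.random.randn(N_SAMPLES, HORIZON, LATENT_DIM)
    costs = np.zeros(N_SAMPLES)
    for i, sequence in enumerate(candidates):
        s = state.copy()
        for z in sequence:
            s = latent_dynamics(s, z)
        costs[i] = task_cost(s, goal)
    return candidates[np.argmin(costs), 0]

# The chosen latent action would then be decoded by the jointly learned
# low-level controller into robot commands.
z_star = plan_latent_action(np.zeros(STATE_DIM), goal=np.ones(STATE_DIM))
```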
Many robots move through the world by composing locomotion primitives like steps and turns. To do so well, robots need not have primitives that make intuitive sense to humans. This becomes of paramount importance when robots are damaged and no longer move as designed. Here we propose a goal function we call coverage, which represents the usefulness of a library of locomotion primitives in a manner agnostic to the particulars of the primitives themselves. We demonstrate the ability to optimize coverage on both simulated and physical robots, and show that coverage can be rapidly recovered after injury. This suggests that by optimizing for coverage, robots can sustain their ability to navigate through the world even in the face of significant mechanical failures. The benefits of this approach are enhanced by sample-efficient, data-driven approaches to system identification that can rapidly inform the optimization of primitives. We found that a larger number of degrees of freedom improved the rate of recovery of our simulated robots, a rare result in the fields of gait optimization and reinforcement learning. We also showed that a robot with limbs made of tree branches (for which no CAD model or first-principles model was available) was able to quickly find an effective, high-coverage library of motion primitives. The optimized primitives are entirely non-obvious to a human observer and are thus unlikely to be attainable through manual tuning.
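One concrete (though simplified) way to score a primitive library for coverage is to discretize achievable planar displacement directions and count how many bins the library reaches; this is an illustrative proxy under our own assumptions, not the paper's exact goal function.

```python
import numpy as np

def coverage(primitive_displacements, n_bins=16):
    """primitive_displacements: (N, 2) array of per-primitive (dx, dy).
    Returns the fraction of heading bins reached by at least one primitive."""
    angles = np.arctan2(primitive_displacements[:, 1],
                        primitive_displacements[:, 0])
    bins = np.floor((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    return len(set(bins.tolist())) / n_bins

library = np.array([[0.10, 0.00],     # step forward
                    [-0.08, 0.00],    # step backward
                    [0.00, 0.06],     # sidestep left
                    [0.05, -0.05]])   # diagonal step
print(coverage(library))              # -> 0.25 (4 of 16 direction bins)
```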