
Deep Visual Perception for Dynamic Walking on Discrete Terrain

Added by Avinash Siravuru
Publication date: 2017
Language: English





Dynamic bipedal walking on discrete terrain, such as stepping stones, is a challenging problem that requires feedback controllers to enforce safety-critical constraints. Enforcing such constraints in real-world experiments demands fast and accurate perception for foothold detection and estimation. In this work, a deep visual perception model is designed to accurately estimate the length of the next step, which serves as input to the feedback controller and enables vision-in-the-loop dynamic walking on discrete terrain. In particular, a custom convolutional neural network architecture is designed and trained to predict the step length to the next foothold from a sampled image preview of the upcoming terrain, captured at foot impact. This visual input is provided only at the beginning of each step and is shown to be sufficient for dynamically stepping onto discrete footholds. Through extensive numerical studies, we show that the robot can autonomously walk for over 100 steps without failure on discrete terrain with footholds randomly positioned within a step-length range of 45-85 centimeters.
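To make the pipeline concrete, below is a minimal sketch of a step-length regressor in PyTorch. The layer sizes, input resolution, and the name StepLengthNet are illustrative assumptions, not the authors' exact architecture; the only elements taken from the abstract are the single image preview per step and the scalar step-length output.

```python
# Sketch of a CNN step-length regressor (illustrative; not the paper's
# exact architecture). Assumes a single-channel depth-image preview of
# the upcoming terrain, captured once at foot impact.
import torch
import torch.nn as nn

class StepLengthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # scalar step length in meters

    def forward(self, img):
        z = self.features(img).flatten(1)
        return self.head(z)

net = StepLengthNet()
preview = torch.randn(1, 1, 128, 128)  # hypothetical 128x128 terrain preview
step_length = net(preview)             # trained target range: 0.45-0.85 m
```

Regressing a single scalar per step keeps the perception output directly compatible with a gait controller parameterized by step length, which is what allows the visual input to be sampled only once per step.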



Related Research

Dynamic quadrupedal locomotion over rough terrain has seen remarkable progress over the last few decades. Small-scale quadruped robots are sufficiently flexible and adaptable to traverse uneven terrain along the sagittal direction, such as slopes and stairs. To achieve autonomous navigation in complex environments, spinning is a fundamental yet indispensable capability for legged robots. However, the spinning behavior of quadruped robots on uneven terrain often exhibits position drift. Motivated by this problem, this study presents an algorithmic method that enables accurate spinning motion over uneven terrain and constrains the spinning radius of the Center of Mass (CoM) to a small range, minimizing the risk of drift. A modified spherical-foot kinematic representation is proposed to improve the foot kinematic model and the rolling dynamics of the quadruped during locomotion. A CoM planner generates stable spinning motion based on projected stability margins. Accurate motion tracking is achieved with a Linear Quadratic Regulator (LQR) that bounds position drift during the spinning movement. Experiments conducted on a small-scale quadruped robot verify the effectiveness of the proposed method on versatile terrains including flat ground, stairs, and slopes.
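As an illustration of the tracking component, the sketch below computes a discrete-time LQR gain and a corrective input for a toy CoM model. The dynamics, weights, and time step are placeholder assumptions; only the use of LQR to bound CoM position drift comes from the abstract.

```python
# Hedged sketch of LQR tracking: given linearized CoM dynamics
# x_{k+1} = A x_k + B u_k, compute the gain K and apply u = -K (x - x_ref).
# A, B, Q, R below are toy placeholders, not the paper's values.
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    P = solve_discrete_are(A, B, Q, R)  # discrete algebraic Riccati equation
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Toy double-integrator CoM model in the horizontal plane (dt = 0.01 s):
# state x = [position_xy, velocity_xy], input u = CoM acceleration.
dt = 0.01
A = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
B = np.vstack([0.5 * dt**2 * np.eye(2), dt * np.eye(2)])
K = lqr_gain(A, B, Q=np.diag([100, 100, 10, 10]), R=0.1 * np.eye(2))

x = np.zeros(4)
x_ref = np.array([0.02, 0.0, 0.0, 0.0])  # keep drift bounded at cm level
u = -K @ (x - x_ref)                     # corrective CoM acceleration
```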
We present a novel control strategy for dynamic legged locomotion in complex scenarios that considers information about the morphology of the terrain in contexts where only on-board mapping and computation are available. The strategy is built on two main elements: first, a contact-sequence task that uses a convolutional neural network to perform fast, continuous evaluation of the terrain in search of safe foothold locations; second, a model predictive controller that takes the foothold locations given by the contact-sequence task and optimizes target ground reaction forces. We assess the performance of our strategy through simulations of the hydraulically actuated quadruped robot HyQReal traversing rough terrain under realistic on-board sensing and computing conditions.
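A rough sketch of the contact-sequence idea follows: a small CNN assigns a safety score to heightmap patches around candidate footholds, and the highest-scoring candidate is handed to the MPC. All names, sizes, and the scoring network are hypothetical, not the actual HyQReal pipeline.

```python
# Illustrative foothold evaluation: score local heightmap patches with a
# small CNN and pick the safest candidate near the nominal foothold.
import torch
import torch.nn as nn

class FootholdScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(16 * 15 * 15, 1),  # score per 15x15 patch
        )

    def forward(self, patch):
        return self.net(patch)

scorer = FootholdScorer()
heightmap = torch.randn(40, 40)  # hypothetical local terrain elevation map
candidates = [(r, c) for r in range(8, 32, 4) for c in range(8, 32, 4)]
patches = torch.stack([heightmap[r - 7:r + 8, c - 7:c + 8]
                       for r, c in candidates])
scores = scorer(patches.unsqueeze(1)).squeeze(1)  # one score per candidate
best = candidates[scores.argmax().item()]         # foothold passed to the MPC
```

Batch-scoring all candidate patches in one forward pass is what makes this kind of evaluation fast enough to run continuously on board.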
The quality of visual feedback can vary significantly on a legged robot meant to traverse unknown and unstructured terrain. The map of the environment, acquired with online state-of-the-art algorithms, often degrades after a few steps due to sensing inaccuracies, slippage, and unexpected disturbances. If not handled properly, this degradation can cause locomotion algorithms to plan trajectories that are inconsistent with reality. In this work, we propose a heuristic-based planning approach that enables a quadruped robot to successfully traverse significantly rough terrain (e.g., stones up to 10 cm in diameter) in the absence of visual feedback. When visual feedback is available, the approach can also exploit it in multiple ways (e.g., to enhance the stepping strategy), according to the quality of the 3D map. The proposed framework also includes reflexes, triggered in specific situations, and the ability to estimate an unknown time-varying disturbance online and compensate for it. We demonstrate the effectiveness of the approach with experiments on our quadruped robot HyQ (85 kg) traversing different terrains, such as ramps, rocks, bricks, pallets, and stairs. We also demonstrate the capability to estimate and compensate for disturbances by showing the robot walking up a ramp while pulling a cart attached to its back.
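The disturbance-estimation idea can be illustrated with a generic momentum-style observer that low-pass filters the force residual between commanded contact forces and measured CoM acceleration. This is a sketch under assumed dynamics, not HyQ's actual implementation; all names and gains are hypothetical.

```python
# Generic online disturbance observer: the external force estimate d_hat
# tracks the residual m*(a - g) - f_cmd that the commanded contact forces
# and gravity cannot explain.
import numpy as np

class DisturbanceObserver:
    def __init__(self, mass, alpha=0.02):
        self.mass = mass          # robot mass in kg (e.g., ~85 for HyQ)
        self.alpha = alpha        # low-pass filter gain per update
        self.d_hat = np.zeros(3)  # estimated external force (N)

    def update(self, f_cmd, a_meas, g=np.array([0.0, 0.0, -9.81])):
        # Newton's second law: m*a = f_cmd + m*g + d, so
        # d = m*(a - g) - f_cmd; filter it to reject measurement noise.
        residual = self.mass * (a_meas - g) - f_cmd
        self.d_hat += self.alpha * (residual - self.d_hat)
        return self.d_hat

obs = DisturbanceObserver(mass=85.0)
# While pulling a cart, d_hat converges toward the cart's drag force and
# can be fed forward to the controller as a compensation term.
```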
We present VILENS (Visual Inertial Lidar Legged Navigation System), an odometry system for legged robots based on factor graphs. The key novelty is the tight fusion of four different sensor modalities to achieve reliable operation when the individual sensors would otherwise produce degenerate estimates. To minimize leg-odometry drift, we extend the robot's state with a linear-velocity bias term that is estimated online. This bias is observable only because of the tight fusion of the preintegrated velocity factor with vision, lidar, and IMU factors. Extensive experimental validation on ANYmal quadruped robots is presented, covering a total duration of 2 h and 1.8 km traveled. The experiments involved dynamic locomotion over loose rocks, slopes, and mud, and included perceptual challenges, such as dark and dusty underground caverns or open, feature-deprived areas, as well as mobility challenges such as slipping and terrain deformation. We show an average improvement of 62% in translational and 51% in rotational errors compared to a state-of-the-art loosely coupled approach. To demonstrate its robustness, VILENS was also integrated with a perceptive controller and a local path planner.
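The velocity-bias term can be illustrated as a plain residual function, independent of any particular factor-graph library; the function names below are hypothetical and only mirror the structure described in the abstract.

```python
# Sketch of the velocity-bias idea: leg odometry measures the true base
# velocity plus a slowly varying bias b, estimated jointly with the state.
import numpy as np

def leg_velocity_residual(v_state, b_state, v_meas, sqrt_info):
    """Whitened residual of a leg-odometry velocity factor."""
    return sqrt_info @ (v_meas - (v_state + b_state))

def bias_random_walk_residual(b_k, b_k1, sqrt_info):
    """Soft constraint keeping the bias slowly varying between nodes."""
    return sqrt_info @ (b_k1 - b_k)

# The bias is only observable because vision, lidar, and IMU factors
# constrain v_state independently; from the leg-odometry factor alone,
# v_state and b_state would be indistinguishable.
v = np.array([0.50, 0.0, 0.0])      # base velocity estimate (m/s)
b = np.array([0.03, 0.0, 0.0])      # leg-odometry velocity bias (m/s)
r = leg_velocity_residual(v, b,
                          v_meas=np.array([0.55, 0.0, 0.0]),
                          sqrt_info=np.eye(3) * 10.0)
```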
Yuan Gao, Yan Gu (2021)
Real-world applications of bipedal robot walking require accurate, real-time state estimation. State estimation for locomotion over dynamic rigid surfaces (DRS), such as elevators, ships, public-transport vehicles, and aircraft, remains under-explored, although state-estimator designs for stationary rigid surfaces have been studied extensively. Addressing DRS locomotion in state estimation is challenging mainly due to the nonlinear, hybrid nature of walking dynamics, the non-stationary surface-foot contact points, and hardware imperfections (e.g., limited availability, noise, and drift of onboard sensors). Toward solving this problem, we introduce an Invariant Extended Kalman Filter (InEKF) whose process and measurement models explicitly consider the DRS movement and hybrid walking behaviors while respectively satisfying the group-affine condition and invariant form. Thanks to these properties, the estimation-error convergence of the filter is provably guaranteed for hybrid DRS locomotion. The measurement model also exploits the holonomic constraint associated with the support-foot and surface orientations, under which the robot's yaw angle in the world frame becomes observable in the presence of general DRS movement. Experimental results of bipedal walking on a rocking treadmill demonstrate that the proposed filter ensures rapid error convergence and an observable base yaw angle.
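For intuition, the sketch below shows an IMU-driven prediction step in the style of contact-aided invariant filtering, with the state (orientation, velocity, position, contact point) treated as a group element. The DRS-specific process model (moving contact point, hybrid resets) is only noted in comments; this is a static-surface simplification, not the paper's filter.

```python
# Simplified InEKF-style state propagation for legged locomotion.
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def exp_so3(w):
    """Rodrigues' formula: exponential map from so(3) to SO(3)."""
    theta = np.linalg.norm(w)
    if theta < 1e-8:
        return np.eye(3) + skew(w)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def predict(R, v, p, d, omega, accel, dt, g=np.array([0, 0, -9.81])):
    """IMU-driven propagation; d is the support-foot contact position.
    On a dynamic rigid surface, d would follow the surface motion --
    that dependence is what the paper's process model adds."""
    a_world = R @ accel + g
    R_next = R @ exp_so3(omega * dt)
    v_next = v + a_world * dt
    p_next = p + v * dt + 0.5 * a_world * dt**2
    return R_next, v_next, p_next, d  # d held fixed (static-surface case)
```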
