
Learning Navigation Skills for Legged Robots with Learned Robot Embeddings

Added by Joanne Truong
Publication date: 2020
Language: English





Recent work has shown results on learning navigation policies for idealized cylinder agents in simulation and transferring them to real wheeled robots. Deploying such navigation policies on legged robots can be challenging due to their complex dynamics and the large dynamical difference between cylinder agents and legged systems. In this work, we learn hierarchical navigation policies that account for the low-level dynamics of legged robots, such as maximum speed, slipping, and contacts, and learn to successfully navigate cluttered indoor environments. To enable transfer of policies learned in simulation to new legged robots and hardware, we learn dynamics-aware navigation policies across multiple robots with robot-specific embeddings. The learned embedding is optimized on new robots, while the rest of the policy is kept fixed, allowing for quick adaptation. We train our policies across three legged robots in simulation: two quadrupeds (A1, AlienGo) and a hexapod (Daisy). At test time, we study the performance of our learned policy on two new legged robots in simulation (Laikago, 4-legged Daisy) and one real-world quadrupedal robot (A1). Our experiments show that our learned policy can sample-efficiently generalize to previously unseen robots and enable sim-to-real transfer of navigation policies for legged robots.
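
The adaptation step described in the abstract, keeping the shared navigation policy frozen and optimizing only a robot-specific embedding, can be illustrated with a short PyTorch sketch. The network sizes, the optimizer, and the supervised stand-in objective below are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class NavPolicy(nn.Module):
    def __init__(self, obs_dim=32, embed_dim=8, act_dim=2, num_robots=3):
        super().__init__()
        # One learned embedding per training robot (A1, AlienGo, Daisy in the paper).
        self.robot_embeddings = nn.Embedding(num_robots, embed_dim)
        self.net = nn.Sequential(
            nn.Linear(obs_dim + embed_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, obs, robot_id):
        z = self.robot_embeddings(robot_id)           # robot-specific context vector
        return self.net(torch.cat([obs, z], dim=-1))  # shared, dynamics-aware policy

policy = NavPolicy()

# Adapting to a new robot: freeze the shared policy and optimize a fresh embedding only.
for p in policy.net.parameters():
    p.requires_grad_(False)
new_embedding = nn.Parameter(torch.zeros(1, 8))       # embedding for the unseen robot
opt = torch.optim.Adam([new_embedding], lr=1e-2)

obs = torch.randn(16, 32)                             # placeholder rollout observations
target = torch.randn(16, 2)                           # placeholder supervision signal
for _ in range(100):
    act = policy.net(torch.cat([obs, new_embedding.expand(16, -1)], dim=-1))
    loss = ((act - target) ** 2).mean()               # stand-in for the true RL objective
    opt.zero_grad(); loss.backward(); opt.step()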



Related Research

Learning adaptable policies is crucial for robots to operate autonomously in our complex and quickly changing world. In this work, we present a new meta-learning method that allows robots to quickly adapt to changes in dynamics. In contrast to gradient-based meta-learning algorithms that rely on second-order gradient estimation, we introduce a more noise-tolerant Batch Hill-Climbing adaptation operator and combine it with meta-learning based on evolutionary strategies. Our method significantly improves adaptation to changes in dynamics in high noise settings, which are common in robotics applications. We validate our approach on a quadruped robot that learns to walk while subject to changes in dynamics. We observe that our method significantly outperforms prior gradient-based approaches, enabling the robot to adapt its policy to changes based on less than 3 minutes of real data.
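
The Batch Hill-Climbing adaptation operator is only described at a high level in this abstract; a plausible reading is to perturb the current parameters in a batch, average each candidate's return over a few noisy evaluations, and greedily keep the best. The sketch below follows that reading; the batch size, noise scale, and toy objective are assumptions.

import numpy as np

def batch_hill_climb(theta, evaluate, num_candidates=16, sigma=0.05, evals_per_candidate=3):
    # One noise-tolerant adaptation step on flat policy parameters `theta`.
    # `evaluate(params)` returns a (noisy) episode return under the changed dynamics;
    # averaging over a few rollouts reduces the evaluation noise.
    candidates = [theta] + [theta + sigma * np.random.randn(*theta.shape)
                            for _ in range(num_candidates)]
    scores = [np.mean([evaluate(c) for _ in range(evals_per_candidate)])
              for c in candidates]
    return candidates[int(np.argmax(scores))]          # keep the best candidate

def noisy_return(p):                                   # toy objective standing in for robot rollouts
    return -np.sum((p - 1.0) ** 2) + 0.1 * np.random.randn()

theta = np.zeros(10)
for _ in range(20):                                    # a few adaptation iterations
    theta = batch_hill_climb(theta, noisy_return)
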
This paper proposes an end-to-end deep reinforcement learning approach for mobile robot navigation with dynamic obstacle avoidance. Using experience collected in a simulation environment, a convolutional neural network (CNN) is trained to predict proper steering actions for a robot from its egocentric local occupancy maps, which accommodate various sensors and fusion algorithms. The trained neural network is then transferred to and executed on a real-world mobile robot to guide its local path planning. The new approach is evaluated both qualitatively and quantitatively in simulation and in real-world robot experiments. The results show that the map-based end-to-end navigation model is easy to deploy on a robotic platform, robust to sensor noise, and outperforms existing DRL-based models on many metrics.
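
As a rough illustration of the kind of map-based policy described above, the sketch below is a small CNN that maps an egocentric local occupancy map to steering commands. The channel counts, kernel sizes, and the two-dimensional output (e.g., linear and angular velocity) are assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class OccupancyMapPolicy(nn.Module):
    def __init__(self, map_size=64, act_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat = 32 * (map_size // 8) ** 2
        self.head = nn.Linear(feat, act_dim)            # e.g., linear and angular velocity

    def forward(self, occupancy_map):
        return self.head(self.encoder(occupancy_map))

# Occupancy maps are sensor-agnostic: any lidar/depth fusion can rasterize into them.
policy = OccupancyMapPolicy()
actions = policy(torch.zeros(1, 1, 64, 64))             # batch of one 64x64 egocentric map
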
This paper studies how to improve the generalization performance and learning speed of navigation agents trained with deep reinforcement learning (DRL). DRL exhibits huge potential for mapless navigation, but DRL agents that perform well in training scenarios are found to perform poorly in unfamiliar real-world scenarios. In this work, we identify the representation of LiDAR readings as a key factor behind the agents' performance degradation and propose a simple but powerful input pre-processing (IP) approach to improve the agents' performance. Because this approach uses adaptively parametric reciprocal functions to pre-process LiDAR readings, we refer to it as IPAPRec and to its normalized version as IPAPRecN. IPAPRec/IPAPRecN can highlight important short-distance values and compress the range of less important long-distance values in laser scans, which addresses the issues induced by conventional representations of laser scans. Their high performance is validated by extensive simulation and real-world experiments. The results show that our methods can substantially improve agents' success rates and greatly reduce the training time compared to conventional methods.
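
As a loose illustration of the pre-processing idea, the sketch below maps raw LiDAR ranges through a reciprocal function with one learnable parameter per beam, expanding short distances and compressing long ones. The specific form c / (x + c) is an assumption for illustration; the paper's adaptively parametric functions may be parameterised differently.

import math
import torch
import torch.nn as nn

class ReciprocalLidarPreprocessor(nn.Module):
    def __init__(self, num_beams=360, init_c=1.0):
        super().__init__()
        # One learnable parameter per beam, trained jointly with the policy.
        self.log_c = nn.Parameter(torch.full((num_beams,), math.log(init_c)))

    def forward(self, ranges):                  # ranges: (batch, num_beams), in metres
        c = self.log_c.exp()                    # keep the parameter positive
        return c / (ranges + c)                 # near obstacles -> ~1, far ones -> ~0

pre = ReciprocalLidarPreprocessor()
scan = torch.rand(4, 360) * 10.0                # fake scans in the 0-10 m range
features = pre(scan)                            # feed these into the DRL policy
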
Recent literature in the robotics community has focused on learning robot behaviors that abstract out lower-level details of robot control. To fully leverage the efficacy of such behaviors, it is necessary to select and sequence them to achieve a given task. In this paper, we present an approach to both learn and sequence robot behaviors, applied to the problem of visual navigation of mobile robots. We construct a layered representation of control policies composed of low-level behaviors and a meta-level policy. The low-level behaviors enable the robot to locomote in a particular environment while avoiding obstacles, and the meta-level policy actively selects the low-level behavior most appropriate for the current situation based purely on visual feedback. We demonstrate the effectiveness of our method on three simulated robot navigation tasks: a legged hexapod robot that must successfully traverse varying terrain, a wheeled robot that must navigate a maze-like course while avoiding obstacles, and finally a wheeled robot navigating in the presence of dynamic obstacles. We show that by learning control policies in a layered manner, we gain the ability to successfully traverse new compound environments composed of distinct sub-environments, and we outperform both the low-level behaviors in their respective sub-environments and a hand-crafted selection of low-level policies on these compound environments.
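
A minimal sketch of the layered scheme described above: a meta-level policy reads visual input and selects which pre-trained low-level behavior to execute at each step. The image encoder and the behavior names are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class MetaSelector(nn.Module):
    def __init__(self, num_behaviors=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_behaviors)        # logits over the low-level behaviors

    def forward(self, image):
        return self.head(self.encoder(image))

def control_step(image, state, behaviors, meta):
    # Pick a low-level behavior from vision, then let it compute the motor commands.
    idx = meta(image).argmax(dim=-1).item()
    return behaviors[idx](state)

# behaviors = [traverse_terrain, avoid_obstacles, follow_corridor]   # pre-trained policies
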
The ability of legged systems to traverse highly constrained environments depends, by and large, on the performance of their motion and balance controllers. This paper presents a controller that excels in a scenario that most state-of-the-art balance controllers have not yet addressed: line walking, or walking on nearly null support regions. Our approach uses a low-dimensional virtual model (2-DoF) to generate balancing actions through a previously derived four-term balance controller and transforms them to the robot through a derived kinematic mapping. The capabilities of this controller are tested in simulation, where we show the 90 kg quadruped robot HyQ crossing a bridge only 6 cm wide (compared to its 4 cm diameter spherical foot) by balancing on two feet at any time while moving along a line. Lastly, we present preliminary experimental results showing HyQ balancing on two legs while being disturbed.
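
As a very rough illustration of the control structure described above, the sketch below computes a balancing action from a 2-DoF virtual model using a four-term linear feedback law, which a kinematic mapping would then distribute to the supporting legs. The choice of virtual states (lateral offset and trunk roll) and the gains are assumptions; the paper's derived controller and kinematic mapping are not reproduced here.

import numpy as np

def four_term_balance_action(y, y_dot, roll, roll_dot,
                             k=(80.0, 15.0, 120.0, 20.0)):
    # Virtual balancing action from a 2-DoF virtual model (lateral offset y, trunk roll).
    k1, k2, k3, k4 = k
    return -(k1 * y + k2 * y_dot + k3 * roll + k4 * roll_dot)

# The resulting virtual action would be mapped through the robot's kinematics
# to joint commands for the two supporting legs while walking along the line.
u = four_term_balance_action(0.02, 0.0, 0.05, -0.1)
print(np.round(u, 3))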