We present a novel method for safely navigating a robot in unknown and uneven outdoor terrains. Our approach trains a Deep Reinforcement Learning (DRL)-based network with channel and spatial attention modules, using a novel reward function, to compute an attention map of the environment. The attention map identifies regions in the environment's elevation map with high elevation gradients where the robot could have reduced stability or even flip over. We transform this attention map into a 2D navigation cost-map, which encodes the planarity (level of flatness) of the terrain. Using the cost-map, we formulate a method for computing local least-cost waypoints leading to the robot's goal and integrate our approach with DWA-RL, a state-of-the-art navigation method. Our approach guarantees safe, locally least-cost paths and dynamically feasible robot velocities in highly uneven terrains. Our hybrid approach also leads to a low sim-to-real gap, which arises while training DRL networks. We observe an improvement in terms of success rate, the cumulative elevation gradient of the robot's trajectory, and the safety of the robot's velocity. We evaluate our method on a real Husky robot in highly uneven real-world terrains and demonstrate its benefits.
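To make the cost-map step concrete, the following is a minimal sketch, not the authors' implementation: assuming the network's attention map and the elevation map arrive as 2D NumPy arrays on the same grid, it combines elevation-gradient magnitude and attention into a normalized cost-map and greedily picks a local least-cost waypoint toward the goal. The function names, weights, and the greedy waypoint rule are illustrative assumptions.

```python
import numpy as np

def attention_to_costmap(elevation, attention, grad_weight=1.0, attn_weight=1.0):
    """Combine elevation-gradient magnitude and attention into a cost-map.
    Cells with steep gradients and high attention (low stability) get high cost."""
    grad_row, grad_col = np.gradient(elevation)        # per-cell elevation gradients
    grad_mag = np.hypot(grad_row, grad_col)            # steepness of the terrain
    cost = grad_weight * grad_mag + attn_weight * attention
    return (cost - cost.min()) / (cost.max() - cost.min() + 1e-9)  # normalize to [0, 1]

def local_least_cost_waypoint(costmap, robot_rc, goal_rc, radius=10, goal_weight=0.05):
    """Greedy surrogate for the waypoint step: within `radius` cells of the robot,
    pick the cell that trades off low terrain cost against progress to the goal."""
    rows, cols = costmap.shape
    best, best_score = robot_rc, float("inf")
    for r in range(max(0, robot_rc[0] - radius), min(rows, robot_rc[0] + radius + 1)):
        for c in range(max(0, robot_rc[1] - radius), min(cols, robot_rc[1] + radius + 1)):
            score = costmap[r, c] + goal_weight * np.hypot(goal_rc[0] - r, goal_rc[1] - c)
            if score < best_score:
                best, best_score = (r, c), score
    return best

# Toy usage: a 64x64 elevation map with a steep ridge and uniform attention.
elev = np.zeros((64, 64))
elev[:, 30:34] = 1.5
attn = np.full_like(elev, 0.1)
cmap = attention_to_costmap(elev, attn)
print(local_least_cost_waypoint(cmap, robot_rc=(32, 5), goal_rc=(32, 60)))
```

In the paper's pipeline the cost-map feeds the waypoint computation and then DWA-RL; the greedy scan above simply stands in for that waypoint step on a toy grid.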
This paper proposes an end-to-end deep reinforcement learning approach for mobile robot navigation with dynamic obstacle avoidance. Using experience collected in a simulation environment, a convolutional neural network (CNN) is trained to predict …
Safe robot navigation in complex and crowded situations is studied in this work. When facing complex environments with both static and dynamic obstacles, unicycle nonholonomic robots in existing works are prone to two extreme behaviors: one …
Mobility in an effective and socially compliant manner is an essential yet challenging task for robots operating in crowded spaces. Recent works have shown the power of deep reinforcement learning techniques to learn socially cooperative policies. However, …
Safe and efficient navigation through human crowds is an essential capability for mobile robots. Previous work on robot crowd navigation assumes that the dynamics of all agents are known and well-defined. In addition, the performance of previous methods …
We present a novel Deep Reinforcement Learning (DRL)-based policy to compute dynamically feasible and spatially aware velocities for a robot navigating among mobile obstacles. Our approach combines the benefits of the Dynamic Window Approach (DWA) …
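For reference, DWA itself restricts candidate commands to the velocities reachable within one control step. The sketch below illustrates that dynamic-window sampling under assumed velocity and acceleration limits; the limit values and function names are illustrative, and the DRL scoring of candidates described in this abstract is not shown.

```python
import numpy as np

def dynamic_window(v, w, dt, limits):
    """Velocity ranges reachable within one control step `dt`, given the current
    linear/angular velocity (v, w) and assumed velocity/acceleration limits."""
    v_lo = max(limits["v_min"], v - limits["a_lin"] * dt)
    v_hi = min(limits["v_max"], v + limits["a_lin"] * dt)
    w_lo = max(-limits["w_max"], w - limits["a_ang"] * dt)
    w_hi = min(limits["w_max"], w + limits["a_ang"] * dt)
    return v_lo, v_hi, w_lo, w_hi

def sample_feasible_velocities(v, w, dt, limits, n_v=5, n_w=11):
    """Discretize the dynamic window into candidate (v, w) pairs; a learned policy
    would then score these dynamically feasible candidates."""
    v_lo, v_hi, w_lo, w_hi = dynamic_window(v, w, dt, limits)
    return [(float(vc), float(wc))
            for vc in np.linspace(v_lo, v_hi, n_v)
            for wc in np.linspace(w_lo, w_hi, n_w)]

# Illustrative limits for a differential-drive robot (not taken from the paper).
limits = {"v_min": 0.0, "v_max": 1.0, "a_lin": 0.5, "a_ang": 1.0, "w_max": 1.5}
candidates = sample_feasible_velocities(v=0.4, w=0.0, dt=0.1, limits=limits)
print(len(candidates), "dynamically feasible (v, w) candidates")
```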