We present a novel method for safely navigating a robot in unknown and uneven outdoor terrains. Our approach trains a Deep Reinforcement Learning (DRL)-based network with channel and spatial attention modules, using a novel reward function, to compute an attention map of the environment. The attention map identifies regions in the environment's elevation map with high elevation gradients where the robot could lose stability or even flip over. We transform this attention map into a 2D navigation cost-map, which encodes the planarity (level of flatness) of the terrain. Using the cost-map, we formulate a method for computing local least-cost waypoints leading to the robot's goal and integrate our approach with DWA-RL, a state-of-the-art navigation method. Our approach guarantees safe, locally least-cost paths and dynamically feasible robot velocities in highly uneven terrains. This hybrid approach also reduces the sim-to-real gap that arises when training DRL networks. We observe improvements in terms of success rate, the cumulative elevation gradient of the robot's trajectory, and the safety of the robot's velocities. We evaluate our method on a real Husky robot in highly uneven real-world terrains and demonstrate its benefits.
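To make the cost-map idea concrete, below is a minimal NumPy sketch of turning an elevation map and an attention map into a 2D cost-map and then picking a local least-cost waypoint. It is not the authors' implementation: the attention map is supplied externally rather than learned end-to-end, and the function names, the 0.1 distance weight, and the search radius are illustrative assumptions.

    import numpy as np

    def elevation_cost_map(elevation, attention, grad_weight=1.0):
        """Fuse elevation gradients with an attention map into a 2D cost-map.

        elevation : (H, W) elevation map of the local terrain patch.
        attention : (H, W) attention weights in [0, 1]; the paper learns these
                    with a DRL network, here they are given as input.
        Returns a (H, W) cost-map where high values mark non-planar, risky cells.
        """
        gy, gx = np.gradient(elevation)              # per-cell elevation gradients
        grad_mag = np.hypot(gx, gy)                  # steepness of each cell
        return attention * (grad_weight * grad_mag)  # attention re-weights steep cells

    def least_cost_waypoint(cost_map, robot_rc, goal_rc, radius=5):
        """Pick the local waypoint trading off cell cost against progress to goal."""
        H, W = cost_map.shape
        best, best_score = robot_rc, np.inf
        r0, c0 = robot_rc
        for r in range(max(0, r0 - radius), min(H, r0 + radius + 1)):
            for c in range(max(0, c0 - radius), min(W, c0 + radius + 1)):
                dist_to_goal = np.hypot(r - goal_rc[0], c - goal_rc[1])
                score = cost_map[r, c] + 0.1 * dist_to_goal  # illustrative trade-off
                if score < best_score:
                    best, best_score = (r, c), score
        return best

    # Example: a synthetic 64x64 terrain with a steep ridge the robot should avoid.
    elev = np.zeros((64, 64)); elev[:, 30:34] = 1.5
    attn = np.ones_like(elev)                # stand-in for the learned attention map
    cost = elevation_cost_map(elev, attn)
    print(least_cost_waypoint(cost, robot_rc=(32, 5), goal_rc=(32, 60)))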
This paper proposes an end-to-end deep reinforcement learning approach for mobile robot navigation with dynamic obstacle avoidance. Using experience collected in a simulation environment, a convolutional neural network (CNN) is trained to predict proper steering actions of a robot from its egocentric local occupancy maps, which accommodate various sensors and fusion algorithms. The trained neural network is then transferred to and executed on a real-world mobile robot to guide its local path planning. The new approach is evaluated both qualitatively and quantitatively in simulation and in real-world robot experiments. The results show that the map-based end-to-end navigation model is easy to deploy on a robotic platform, is robust to sensor noise, and outperforms other existing DRL-based models on many metrics.
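As a rough illustration of such a map-based policy, the PyTorch sketch below maps an egocentric occupancy map to discrete steering actions. The layer sizes, the 60x60 input, and the 5-action head are assumptions for illustration, not the paper's architecture.

    import torch
    import torch.nn as nn

    class OccupancyPolicy(nn.Module):
        """Minimal CNN mapping an egocentric local occupancy map to discrete
        steering actions. Sketch only: all sizes are illustrative assumptions."""
        def __init__(self, n_actions=5):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
                nn.Flatten(),
            )
            self.head = nn.Sequential(
                nn.LazyLinear(128), nn.ReLU(),
                nn.Linear(128, n_actions),       # Q-values or action logits
            )
        def forward(self, occupancy):            # occupancy: (B, 1, H, W) in [0, 1]
            return self.head(self.features(occupancy))

    policy = OccupancyPolicy()
    q = policy(torch.rand(1, 1, 60, 60))         # map from any sensor-fusion backend
    action = q.argmax(dim=1)                     # greedy steering action at deployment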
This work studies safe robot navigation in complex and crowded environments. When facing environments with both static and dynamic obstacles, unicycle nonholonomic robots in existing works are prone to two extreme behaviors: either they fall into dead ends formed by obstacles, or they fail to complete the navigation task in time due to excessive collision avoidance. To address this, we propose the R-SARL framework, which is based on a deep reinforcement learning algorithm and augments the reward function to avoid collisions. In particular, we estimate unsafe interactions between the robot and obstacles within a look-ahead distance and penalize them accordingly, so that the robot can avoid collisions in advance and reach its destination safely. Furthermore, we penalize frequent excessive detours to reduce timeouts and thus improve navigation efficiency. We test our method on various challenging and complex crowd navigation tasks. The results show that our method improves navigation performance and outperforms state-of-the-art methods.
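A minimal sketch of the reward-augmentation idea described above, assuming constant-velocity propagation of robot and obstacles over a short horizon; the constants (horizon, safety distance, penalty gains) and function names are illustrative assumptions, not the R-SARL values.

    import numpy as np

    def lookahead_penalty(robot_pos, robot_vel, obstacles, horizon=1.0,
                          d_safe=0.5, k_unsafe=0.25):
        """Penalize predicted unsafe interactions within a look-ahead horizon.
        Robot and each obstacle are propagated under constant velocity, and
        proximity inside the safety distance is penalized."""
        penalty = 0.0
        future_robot = robot_pos + horizon * robot_vel
        for pos, vel in obstacles:                   # (position, velocity) pairs
            future_obs = pos + horizon * vel
            gap = np.linalg.norm(future_robot - future_obs)
            if gap < d_safe:
                penalty -= k_unsafe * (d_safe - gap)  # graded: worse when closer
        return penalty

    def reward(reached_goal, collided, detour_angle,
               robot_pos, robot_vel, obstacles):
        """Augmented reward: goal/collision terms, look-ahead safety, detour penalty."""
        r = 1.0 if reached_goal else (-0.25 if collided else 0.0)
        r += lookahead_penalty(robot_pos, robot_vel, obstacles)
        r -= 0.05 * abs(detour_angle)                # discourage excessive detours
        return r

    # One pedestrian ahead, walking toward the robot.
    obs = [(np.array([1.0, 0.0]), np.array([-0.5, 0.0]))]
    print(reward(False, False, 0.2, np.zeros(2), np.array([0.5, 0.0]), obs))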
Mobility in an effective and socially compliant manner is an essential yet challenging task for robots operating in crowded spaces. Recent works have shown the power of deep reinforcement learning techniques to learn socially cooperative policies. However, their cooperation ability deteriorates as the crowd grows, since they typically relax the problem to a one-way Human-Robot interaction problem. In this work, we go beyond first-order Human-Robot interaction and more explicitly model Crowd-Robot Interaction (CRI). We propose to (i) rethink pairwise interactions with a self-attention mechanism, and (ii) jointly model Human-Robot as well as Human-Human interactions in the deep reinforcement learning framework. Our model captures the Human-Human interactions occurring in dense crowds that indirectly affect the robot's anticipation capability. Our proposed attentive pooling mechanism learns the collective importance of neighboring humans with respect to their future states. Various experiments demonstrate that our model can anticipate human dynamics and navigate in crowds with time efficiency, outperforming state-of-the-art methods.
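The attentive pooling mechanism can be sketched as follows: each pairwise Human-Robot interaction feature receives a learned importance score, and the crowd is summarized as the softmax-weighted sum. The feature dimension and layer sizes below are assumptions; this is a sketch of the general mechanism, not the authors' network.

    import torch
    import torch.nn as nn

    class AttentivePooling(nn.Module):
        """Self-attention pooling over pairwise interaction features: a learned
        importance score per neighbor, then a softmax-weighted crowd summary."""
        def __init__(self, feat_dim=32):
            super().__init__()
            self.score = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                       nn.Linear(64, 1))
        def forward(self, pair_feats):               # (B, N_humans, feat_dim)
            logits = self.score(pair_feats)          # (B, N, 1) importance per human
            weights = torch.softmax(logits, dim=1)   # collective importance
            return (weights * pair_feats).sum(dim=1) # (B, feat_dim) crowd summary

    pool = AttentivePooling()
    crowd = torch.randn(1, 5, 32)                    # 5 neighbors' interaction features
    print(pool(crowd).shape)                         # torch.Size([1, 32])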
Safe and efficient navigation through human crowds is an essential capability for mobile robots. Previous work on robot crowd navigation assumes that the dynamics of all agents are known and well-defined. In addition, the performance of previous methods deteriorates in partially observable environments and environments with dense crowds. To tackle these problems, we propose decentralized structural-Recurrent Neural Network (DS-RNN), a novel network that reasons about spatial and temporal relationships for robot decision making in crowd navigation. We train our network with model-free deep reinforcement learning without any expert supervision. We demonstrate that our model outperforms previous methods in challenging crowd navigation scenarios. We successfully transfer the policy learned in the simulator to a real-world TurtleBot 2i.
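A toy PyTorch sketch of the spatio-temporal structure suggested by the abstract, assuming a spatial-edge GRU per human, attention pooling over edges, and a temporal GRU cell for the robot node; all sizes and the exact wiring are guesses for illustration, not the DS-RNN architecture.

    import torch
    import torch.nn as nn

    class STGraphPolicy(nn.Module):
        """Toy spatio-temporal graph policy: an edge RNN tracks each human-robot
        pair over time, attention pools the edges, and a node RNN maintains the
        robot's own temporal state that feeds the policy head."""
        def __init__(self, edge_dim=4, hid=32):
            super().__init__()
            self.edge_rnn = nn.GRU(edge_dim, hid, batch_first=True)
            self.attn = nn.Linear(hid, 1)
            self.node_rnn = nn.GRUCell(hid, hid)
            self.policy = nn.Linear(hid, 2)          # e.g. linear/angular velocity
        def forward(self, edges, node_h):
            # edges: (N_humans, T, edge_dim) relative states over a short history
            _, h = self.edge_rnn(edges)              # h: (1, N_humans, hid)
            h = h.squeeze(0)
            w = torch.softmax(self.attn(h), dim=0)   # attention over humans
            pooled = (w * h).sum(dim=0, keepdim=True)
            node_h = self.node_rnn(pooled, node_h)   # temporal robot state
            return self.policy(node_h), node_h

    net = STGraphPolicy()
    act, h = net(torch.randn(5, 8, 4), torch.zeros(1, 32))
    print(act.shape)                                 # torch.Size([1, 2])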
We present a novel Deep Reinforcement Learning (DRL)-based policy to compute dynamically feasible and spatially aware velocities for a robot navigating among mobile obstacles. Our approach combines the benefits of the Dynamic Window Approach (DWA), in terms of satisfying the robot's dynamics constraints, with state-of-the-art DRL-based navigation methods that can handle moving obstacles and pedestrians well. Our formulation achieves these goals by embedding the environmental obstacles' motions in a novel low-dimensional observation space. It also uses a novel reward function to positively reinforce velocities that move the robot away from the obstacles' heading directions, leading to a significantly lower number of collisions. We evaluate our method in realistic 3-D simulated environments and on a real differential drive robot in challenging dense indoor scenarios with several walking pedestrians. We compare our method with state-of-the-art collision avoidance methods and observe significant improvements in terms of success rate (up to a 33% increase), number of dynamics constraint violations (up to a 61% decrease), and smoothness. We also conduct ablation studies to highlight the advantages of our observation space formulation and reward structure.
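A brief sketch of the two ingredients named above: sampling velocities inside the DWA dynamic window (the standard DWA feasibility construction, with illustrative limits), and a reward term that reinforces motion away from where an obstacle is heading. The specific form of the reward and all constants are assumptions, not the paper's formulation.

    import numpy as np

    def dwa_feasible_velocities(v, w, a_max=0.5, alpha_max=1.0, dt=0.25,
                                v_lim=(0.0, 1.0), w_lim=(-1.0, 1.0), n=5):
        """Sample (v, w) pairs inside the dynamic window, i.e. velocities
        reachable from the current (v, w) in one step under acceleration limits."""
        vs = np.linspace(max(v_lim[0], v - a_max * dt),
                         min(v_lim[1], v + a_max * dt), n)
        ws = np.linspace(max(w_lim[0], w - alpha_max * dt),
                         min(w_lim[1], w + alpha_max * dt), n)
        return [(vi, wi) for vi in vs for wi in ws]

    def heading_avoidance_reward(robot_pos, robot_vel, obs_pos, obs_vel,
                                 k=1.0, dt=1.0):
        """Reward motion away from where the obstacle is heading: positive when
        the robot's velocity points away from the obstacle's predicted position."""
        obs_future = obs_pos + dt * obs_vel              # where the obstacle is heading
        to_danger = obs_future - robot_pos
        to_danger /= (np.linalg.norm(to_danger) + 1e-6)
        v_dir = robot_vel / (np.linalg.norm(robot_vel) + 1e-6)
        return -k * float(np.dot(v_dir, to_danger))      # > 0 when moving away

    candidates = dwa_feasible_velocities(v=0.4, w=0.0)
    r = heading_avoidance_reward(np.zeros(2), np.array([0.4, 0.0]),
                                 np.array([2.0, 0.0]), np.array([-0.5, 0.0]))
    print(len(candidates), r)    # reward < 0: robot is driving into the obstacle's path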