This paper analyzes a robot path planning model based on an RNN and visual quality evaluation in crowded environments. Mobile robot path planning is the key to robot navigation and an important field of robotics research. The robot's motion space is taken to be a two-dimensional plane, and when the artificial potential field method is used for path planning, the robot's motion is treated as motion driven by the force of a virtual artificial potential field. Compared with simple image acquisition, image acquisition in a complex crowd environment requires image pre-processing first; we mainly use OpenCV calibration tools to pre-process the acquired images. In the methodology design, an RNN-based visual quality evaluation is used to filter background noise. After calibration, Gaussian noise and other redundant information that affects subsequent operations still remains in the image, so a new RNN-based image quality evaluation algorithm is developed and denoising is performed on this basis. Finally, the novel path planning model is designed and simulated. Experiments comparing the model with state-of-the-art models demonstrate its robustness.
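To make the planning step concrete, the following is a minimal sketch of artificial potential field path planning on a two-dimensional plane, as referenced in the abstract. The gains, obstacle layout, influence radius, and step size are illustrative assumptions and are not the parameters used in the paper.

```python
# Minimal sketch of 2-D artificial potential field (APF) path planning.
# All gains, the obstacle layout, and the step size are assumed values
# for illustration only; they do not reproduce the paper's settings.
import numpy as np

K_ATT = 1.0    # attractive gain toward the goal (assumed)
K_REP = 100.0  # repulsive gain away from obstacles (assumed)
RHO_0 = 2.0    # influence radius of an obstacle (assumed)

def attractive_force(pos, goal):
    """Pull toward the goal: F_att = -k_att * (pos - goal)."""
    return -K_ATT * (pos - goal)

def repulsive_force(pos, obstacles):
    """Push away from every obstacle closer than RHO_0."""
    force = np.zeros(2)
    for obs in obstacles:
        diff = pos - obs
        rho = np.linalg.norm(diff)
        if 1e-6 < rho < RHO_0:
            force += K_REP * (1.0 / rho - 1.0 / RHO_0) / rho**2 * (diff / rho)
    return force

def plan(start, goal, obstacles, step=0.05, max_iters=2000, tol=0.1):
    """Follow the negative potential gradient until the goal is reached."""
    pos = np.asarray(start, dtype=float)
    goal = np.asarray(goal, dtype=float)
    path = [pos.copy()]
    for _ in range(max_iters):
        force = attractive_force(pos, goal) + repulsive_force(pos, obstacles)
        norm = np.linalg.norm(force)
        if norm < 1e-6:  # net force vanished: likely a local minimum
            break
        pos = pos + step * force / norm  # fixed-length step along the force
        path.append(pos.copy())
        if np.linalg.norm(pos - goal) < tol:
            break
    return np.array(path)

if __name__ == "__main__":
    obstacles = [np.array([2.0, 2.0]), np.array([3.0, 1.0])]  # assumed layout
    path = plan(start=(0.0, 0.0), goal=(5.0, 4.0), obstacles=obstacles)
    print(f"planned {len(path)} waypoints, final point {path[-1]}")
```

Normalizing the combined force before stepping keeps the waypoint spacing uniform and avoids overshoot near strong repulsive fields; the early-exit on a vanishing force reflects the well-known local-minimum limitation of the basic APF formulation.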