
Design and Experimental Evaluation of a Hierarchical Controller for an Autonomous Ground Vehicle with Large Uncertainties

Posted by Juncheng Li
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Autonomous ground vehicles (AGVs) are receiving increasing attention, and motion planning and control for these vehicles has become a hot research topic. In real applications such as material handling, an AGV is subject to large uncertainties, which make its motion planning and control challenging. In this paper, we investigate this problem by proposing a hierarchical control scheme that integrates model predictive control (MPC) based path planning and trajectory tracking control at the high level with reduced-order extended state observer (RESO) based dynamic control at the low level. The high-level control consists of an MPC-based improved path planner, a velocity planner, and an MPC-based tracking controller; both the path planning and trajectory tracking problems are formulated under an MPC framework. The low-level control employs the idea of active disturbance rejection control (ADRC): the uncertainties are estimated by a RESO and compensated for in the control in real time. We show that, for the first-order uncertain AGV dynamic model, the RESO-based control only needs to know the control direction. Finally, simulations and experiments on an AGV with different payloads are conducted. The results illustrate that the proposed hierarchical control scheme achieves satisfactory motion planning and control performance under large uncertainties.
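
To make the low-level idea concrete, the sketch below (our own illustration, not the authors' code; the disturbance model, gains, and discretization are illustrative assumptions) simulates a first-order uncertain velocity model v_dot = f + b*u, estimates the lumped disturbance f with a reduced-order extended state observer, and cancels it in the control. The nominal input gain b0 only needs to share the sign of the unknown true gain, mirroring the control-direction claim above.

import math

def simulate(T=5.0, dt=0.005, omega_o=30.0, kp=4.0, b0=1.0):
    """omega_o: observer bandwidth, kp: feedback gain, b0: nominal input gain."""
    b_true = 2.3                      # unknown true input gain (same sign as b0)
    v, xi = 0.0, 0.0                  # vehicle velocity and RESO internal state
    v_ref, v_ref_dot = 1.0, 0.0       # constant velocity set-point
    for k in range(int(T / dt)):
        t = k * dt
        f_hat = xi + omega_o * v      # RESO estimate of the lumped disturbance
        # ADRC-style law: cancel the estimated disturbance, then simple feedback.
        u = (-f_hat + kp * (v_ref - v) + v_ref_dot) / b0
        # "True" plant with unknown friction and a time-varying load term.
        f_true = -1.5 * v + 0.8 * math.sin(2.0 * t)
        v_next = v + dt * (f_true + b_true * u)
        # Euler update of xi_dot = -omega_o*xi - omega_o**2*v - omega_o*b0*u.
        xi += dt * (-omega_o * xi - omega_o**2 * v - omega_o * b0 * u)
        v = v_next
    return v

if __name__ == "__main__":
    print("velocity tracking error after 5 s:", abs(1.0 - simulate()))

Despite the deliberate gain mismatch between b_true and b0, the loop remains well behaved as long as their signs agree, which is the intuition behind the "only the control direction is needed" result.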




Read also

Autonomous mobile robots have the potential to solve missions that are either too complex or too dangerous to be accomplished by humans. In this paper, we address the design and autonomous deployment of a ground vehicle equipped with a robotic arm for urban firefighting scenarios. We describe the hardware design and the algorithmic approaches for autonomous navigation, planning, fire source identification, and abatement in unstructured urban scenarios. The approach employs on-board sensors for autonomous navigation and thermal camera information for source identification. A custom electro-mechanical pump ejects water for fire abatement. The proposed approach is validated through several experiments, where we show the ability to identify and abate a sample heat source in a building. The whole system was developed and deployed during the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2020, for Challenge No. 3 (Fire Fighting Inside a High-Rise Building) and during the Grand Challenge, where our approach scored the highest number of points among all UGV solutions and was instrumental in winning first place.
Environmental monitoring of marine environments presents several challenges: the harshness of the environment, the often remote location, and, most importantly, the vast area it covers. Manual operations are time consuming, often dangerous, and labor intensive. Operations from oceanographic vessels are costly and limited to open seas and generally deeper bodies of water. In addition, with lake, river, and ocean shoreline being a finite resource, waterfront property is an increasingly valuable commodity, requiring exploration and continued monitoring of remote waterways. In order to efficiently explore and monitor currently known marine environments, as well as reach and explore remote areas of interest, we present the design of an autonomous surface vehicle (ASV) with the power to cover large areas, the payload capacity to carry sufficient power and sensor equipment, and enough fuel to remain on task for extended periods. An analysis of the design and a discussion of lessons learned during deployments are presented in this paper.
Autonomous vehicle manufacturers recognize that LiDAR provides accurate 3D views and precise distance measures under highly uncertain driving conditions. Its practical implementation, however, remains costly. This paper investigates the optimal LiDAR configuration problem with the goal of utility maximization. We use the perception area and the non-detectable subspace to cast the design procedure as a min-max optimization problem and propose a bio-inspired measure -- the volume-to-surface-area ratio (VSR) -- as an easy-to-evaluate cost function representing the size of the non-detectable subspaces of a given configuration. We then adopt a cuboid-based approach to show that the proposed VSR-based measure is a well-suited proxy for object detection rate. The Artificial Bee Colony evolutionary algorithm is found to yield a tractable cost function computation. Our experiments highlight the effectiveness of the proposed VSR measure in finding cost-effective configurations and provide insights that can improve the design of AV systems.
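
As a small, self-contained illustration of the cuboid-based VSR idea (our own sketch; the side lengths and the comparison are hypothetical, not taken from the paper):

def cuboid_vsr(lx, ly, lz):
    """Volume-to-surface-area ratio of an axis-aligned cuboid approximating
    one non-detectable subspace (lengths in metres)."""
    volume = lx * ly * lz
    surface = 2.0 * (lx * ly + ly * lz + lz * lx)
    return volume / surface

# A thin slab and a cube of equal volume: the bulkier, cube-like blind region
# has the larger VSR, one intuition for treating VSR as a proxy for how easily
# an object can remain undetected.
print(cuboid_vsr(4.0, 2.0, 0.125))   # thin slab, 1 m^3  -> ~0.057
print(cuboid_vsr(1.0, 1.0, 1.0))     # unit cube, 1 m^3  -> ~0.167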
Model predictive control (MPC) is widely used for path tracking of autonomous vehicles due to its ability to handle various types of constraints. However, a considerable predictive error exists because of errors in the mathematical model or its linearization. In this paper, we propose a framework combining MPC with a learning-based error estimator and a feedforward compensator to improve path tracking accuracy. An extreme learning machine is implemented to estimate the model-based predictive error from vehicle state feedback information. Offline training data are collected from a vehicle controlled by a model-defective regular MPC for path tracking under several working conditions. The data include the vehicle state and the spatial error between the current actual position and the corresponding predicted position. Based on the estimated predictive error, we then design a PID-based feedforward compensator. Simulation results via Carsim show the estimation accuracy of the predictive error and the effectiveness of the proposed framework for path tracking of an autonomous vehicle.
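
A rough sketch of how such feedforward compensation could be wired in (our own assumptions: estimate_error is a hypothetical placeholder for the offline-trained extreme learning machine, and mpc.solve is an assumed interface, not the paper's API):

class PIDFeedforward:
    """PID-style compensator mapping an estimated predictive error to a
    steering correction added to the nominal MPC command."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def correction(self, predicted_error):
        self.integral += predicted_error * self.dt
        derivative = (predicted_error - self.prev_error) / self.dt
        self.prev_error = predicted_error
        return (self.kp * predicted_error
                + self.ki * self.integral
                + self.kd * derivative)

def estimate_error(vehicle_state):
    # Placeholder for the learned estimator: maps vehicle state feedback to the
    # expected spatial error between the predicted and the actual position.
    raise NotImplementedError

# Hypothetical usage inside the control loop:
#   compensator = PIDFeedforward(kp=0.5, ki=0.05, kd=0.1, dt=0.02)
#   delta_mpc = mpc.solve(vehicle_state, reference)    # nominal MPC steering
#   delta_cmd = delta_mpc + compensator.correction(estimate_error(vehicle_state))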
Project AutoVision aims to develop localization and 3D scene perception capabilities for a self-driving vehicle. Such capabilities will enable autonomous navigation in urban and rural environments, in day and night, and with cameras as the only exteroceptive sensors. The sensor suite employs many cameras for both 360-degree coverage and accurate multi-view stereo; the use of low-cost cameras keeps the cost of this sensor suite to a minimum. In addition, the project seeks to extend the operating envelope to include GNSS-less conditions, which are typical of environments with tall buildings, foliage, and tunnels. Emphasis is placed on leveraging multi-view geometry and deep learning to enable the vehicle to localize and perceive in 3D space. This paper presents an overview of the project and describes the sensor suite and current progress in the areas of calibration, localization, and perception.