
Flexible MPC-based Conflict Resolution Using Online Adaptive ADMM

Submitted by Jerry An
Publication date: 2021
Paper language: English
Author: Jerry An





Decentralized conflict resolution for autonomous vehicles is needed in many places where a centralized method is not feasible, e.g., parking lots, rural roads, merge lanes, etc. However, existing methods generally do not fully utilize optimization in decentralized conflict resolution. We propose a decentralized conflict resolution method for autonomous vehicles based on a novel extension to the Alternating Direction Method of Multipliers (ADMM), called Online Adaptive ADMM (OA-ADMM), and on Model Predictive Control (MPC). OA-ADMM is tailored to online systems, where fast and adaptive real-time optimization is crucial, and allows the use of safety information about the physical system to improve safety in real-time control. We prove convergence in the static case and give requirements for online convergence. Combining OA-ADMM and MPC allows for robust decentralized motion planning and control that seamlessly integrates decentralized conflict resolution. The effectiveness of our proposed method is shown through simulations in CARLA, an open-source vehicle simulator, resulting in a reduction of 47.93% in mean added delay compared with the next best method.
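
The OA-ADMM update rules and their coupling with each vehicle's MPC are specific to the paper and are not reproduced here. As a rough, generic illustration of the kind of iteration involved, the sketch below implements plain consensus ADMM with the common residual-balancing penalty adaptation; all function names, the quadratic local costs, and the parameter values are assumptions made for this example only.

    import numpy as np

    def consensus_admm(local_costs, dim, rho=1.0, iters=50, mu=10.0, tau=2.0):
        """Each agent i minimizes 0.5*||A_i x_i - b_i||^2 subject to x_i = z (consensus)."""
        n = len(local_costs)
        x = np.zeros((n, dim))    # local copies of the shared decision variable
        z = np.zeros(dim)         # consensus variable
        u = np.zeros((n, dim))    # scaled dual variables
        for _ in range(iters):
            # x-update: each agent solves its own small regularized problem (parallelizable)
            for i, (A, b) in enumerate(local_costs):
                H = A.T @ A + rho * np.eye(dim)
                x[i] = np.linalg.solve(H, A.T @ b + rho * (z - u[i]))
            z_old = z
            z = (x + u).mean(axis=0)   # z-update: averaging enforces agreement
            u += x - z                 # dual update
            # adaptive penalty: keep primal and dual residuals of comparable size
            r = np.linalg.norm(x - z)                          # primal residual
            s = rho * np.sqrt(n) * np.linalg.norm(z - z_old)   # dual residual
            if r > mu * s:
                rho *= tau
                u /= tau    # rescale the scaled duals whenever rho changes
            elif s > mu * r:
                rho /= tau
                u *= tau
        return z

    # Hypothetical usage: four agents with random local quadratic costs.
    rng = np.random.default_rng(0)
    costs = [(rng.standard_normal((3, 2)), rng.standard_normal(3)) for _ in range(4)]
    z_star = consensus_admm(costs, dim=2)

In OA-ADMM the penalty adaptation additionally incorporates safety information about the physical system, which a generic residual-balancing rule like the one above does not capture.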




Read also

Adaptive model predictive control (MPC) robustly ensures safety while reducing uncertainty during operation. In this paper, a distributed version is proposed to deal with network systems featuring multiple agents and limited communication. To solve the problem in a distributed manner, structure is imposed on the control design ingredients without sacrificing performance. Decentralized and distributed adaptation schemes that allow the uncertainty to be reduced online, in a manner compatible with the network topology, are also proposed. The algorithm ensures robust constraint satisfaction, recursive feasibility and finite-gain $\ell_2$ stability, and yields lower closed-loop cost compared to robust distributed MPC in simulations.
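
The abstract does not specify the adaptation schemes; purely as an illustration, a common ingredient of adaptive MPC is a set-membership update in which each agent shrinks its own parameter-uncertainty set using only local measurements, so the update is compatible with limited communication. The scalar model and all names below are assumptions, not the paper's design.

    def update_theta_interval(theta_lo, theta_hi, x_prev, u_prev, x_next, w_bar):
        """One local set-membership step for x[k+1] = theta*x[k] + u[k] + w, |w| <= w_bar.
        Returns the interval of parameter values consistent with the new sample."""
        if abs(x_prev) < 1e-9:
            return theta_lo, theta_hi            # this sample carries no information on theta
        lo = (x_next - u_prev - w_bar) / x_prev
        hi = (x_next - u_prev + w_bar) / x_prev
        if x_prev < 0:                           # dividing by a negative number flips the bounds
            lo, hi = hi, lo
        return max(theta_lo, lo), min(theta_hi, hi)   # intersect with the previous uncertainty set
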
Wireless sensor networks have recently received much attention due to their broad applicability and ease of installation. This paper is concerned with a distributed state estimation problem, where all sensor nodes are required to reach a consensus estimate. The weighted least squares (WLS) estimator is an appealing way to handle this problem since it does not need any prior distribution information. To this end, we first exploit the equivalence between the information filter and the WLS estimator. Then, we formulate an optimization problem based on this relation coupled with a consensus constraint. Finally, the consensus-based distributed WLS problem is tackled by the alternating direction method of multipliers (ADMM). Numerical simulations together with theoretical analysis verify the convergence and the consensus of the estimates across nodes.
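
For context, the WLS/information-filter relation the abstract refers to is the standard linear-estimation identity (not a contribution of the paper): with local measurements $z_i = H_i x + v_i$ and noise covariances $R_i$, the centralized WLS estimate can be written in information form as

    $\hat{x} = \left( \sum_{i=1}^{N} H_i^\top R_i^{-1} H_i \right)^{-1} \sum_{i=1}^{N} H_i^\top R_i^{-1} z_i$

so each node only contributes its local information pair $(H_i^\top R_i^{-1} H_i, \; H_i^\top R_i^{-1} z_i)$, and the consensus constraint handled by ADMM lets the nodes agree on these sums, and hence on $\hat{x}$, without a fusion center.
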
The cost of power distribution infrastructure is driven by the peak power encountered in the system. Therefore, distribution network operators consider billing consumers behind a common transformer as a function of their peak demand and leave it to the consumers to manage their collective costs. This management problem is, however, not trivial. In this paper, we consider a multi-agent residential smart grid system, where each agent has local renewable energy production and energy storage, and all agents are connected to a local transformer. The objective is to develop an optimal policy that minimizes the economic cost consisting of both the spot-market cost for each consumer and their collective peak-power cost. We propose to use a parametric Model Predictive Control (MPC) scheme to approximate the optimal policy. The optimality of this policy is limited by its finite horizon and inaccurate forecasts of the local power production and consumption. A Deterministic Policy Gradient (DPG) method is deployed to adjust the MPC parameters and improve the policy. Our simulations show that the proposed MPC-based Reinforcement Learning (RL) method can effectively decrease the long-term economic cost for this smart grid problem.
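
As a toy illustration of the cost structure described above (per-consumer spot-market cost plus a collective peak-power charge at the shared transformer), the snippet below evaluates that cost for a given consumption profile; the units, time step, and argument names are assumptions, and none of the MPC or DPG machinery of the paper is reproduced.

    import numpy as np

    def collective_cost(grid_power, spot_price, peak_price):
        """grid_power: (n_agents, T) power drawn from the grid [kW], assuming 1-hour steps;
        spot_price: (T,) spot-market prices [$/kWh]; peak_price: scalar charge [$/kW]."""
        spot_cost = float(np.sum(grid_power * spot_price))        # each consumer's energy bill, summed
        transformer_load = grid_power.sum(axis=0)                 # total load seen by the transformer
        peak_cost = peak_price * float(transformer_load.max())    # collective peak-power charge
        return spot_cost + peak_cost
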
An autonomous adaptive MPC architecture is presented for control of heating, ventilation and air conditioning (HVAC) systems to maintain indoor temperature while reducing energy use. Although equipment use and occupancy change with time, existing MPC methods are not capable of automatically relearning models and computing control decisions reliably for extended periods without intervention from a human expert. We seek to address this weakness. Two major features are embedded in the proposed architecture to enable autonomy: (i) a system identification algorithm from our prior work that periodically re-learns building dynamics and unmeasured internal heat loads from data without requiring re-tuning by experts. The estimated model is guaranteed to be stable and has desirable physical properties irrespective of the data; (ii) an MPC planner with a convex approximation of the original nonconvex problem. The planner uses a descent and convergent method, with the underlying optimization problem being feasible and convex. A year-long simulation with a realistic plant shows that both features of the proposed architecture - periodic model and disturbance updates and convexification of the planning problem - are essential to obtain a performance improvement over a commonly used baseline controller. Without these features, though MPC can outperform the baseline controller in certain situations, the benefits may not be substantial enough to warrant the investment in MPC.
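
As a minimal stand-in for the periodic re-learning step described above, the sketch below fits a first-order thermal model plus a constant term (capturing unmeasured internal heat loads) by ordinary least squares; the paper's identification algorithm additionally guarantees stability and physically meaningful parameters, which this naive fit does not.

    import numpy as np

    def fit_thermal_model(T_in, T_out, q_hvac):
        """Fit T_in[k+1] = a*T_in[k] + b*T_out[k] + c*q_hvac[k] + d from measured sequences.
        The constant d absorbs the (unmeasured) internal heat load."""
        X = np.column_stack([T_in[:-1], T_out[:-1], q_hvac[:-1], np.ones(len(T_in) - 1)])
        coef, *_ = np.linalg.lstsq(X, T_in[1:], rcond=None)
        return coef   # (a, b, c, d)
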
The trade-off between optimality and complexity has been one of the most important challenges in the field of robust Model Predictive Control (MPC). To address this challenge, we propose a flexible robust MPC scheme that synergizes the multi-stage and tube-based MPC approaches. The key idea is to exploit the non-conservatism of multi-stage MPC and the simplicity of tube-based MPC. The proposed scheme provides two options for the user to determine the trade-off depending on the application: the choice of the robust horizon and the classification of the uncertainties. Beyond the robust horizon, the branching of the scenario tree employed in multi-stage MPC is avoided with the help of tubes. The growth of the problem size with respect to the number of uncertainties is reduced by handling small uncertainties via an invariant tube that can be computed offline. This results in linear growth of the problem size beyond the robust horizon and no growth of the problem size with respect to the small-magnitude uncertainties. The proposed approach helps to achieve the desired trade-off between optimality and complexity compared to existing robust MPC approaches. We show that the proposed approach is robustly asymptotically stable. Its advantages are demonstrated on a CSTR example.
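
The complexity argument in the last abstract can be made concrete with a small counting exercise: a scenario tree that branches only during the robust horizon, with the remaining uncertainty absorbed by a tube, grows linearly in the prediction horizon beyond that point. The sketch below is purely illustrative and not the paper's formulation.

    def scenario_tree_nodes(n_branches, robust_horizon, prediction_horizon):
        """Count scenario-tree nodes when branching stops after robust_horizon steps."""
        leaves, nodes = 1, 0
        for k in range(prediction_horizon):
            if k < robust_horizon:
                leaves *= n_branches   # full branching over the large-uncertainty realizations
            nodes += leaves            # one node per scenario at stage k+1
        return nodes

    # e.g. 3 uncertainty realizations over a horizon of 20 steps:
    # scenario_tree_nodes(3, robust_horizon=2, prediction_horizon=20)   -> 174 (linear growth after stage 2)
    # scenario_tree_nodes(3, robust_horizon=20, prediction_horizon=20)  -> ~5.2 billion (full exponential tree)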