
Data-Driven Scenario Optimization for Automated Controller Tuning with Probabilistic Performance Guarantees

Added by Joel Paulson
Publication date: 2020
Language: English





Systematic design and verification of advanced control strategies for complex systems under uncertainty largely remains an open problem. Despite the promise of black-box optimization methods for automated controller tuning, they generally lack formal guarantees on the solution quality, which is especially important in the control of safety-critical systems. This paper focuses on obtaining closed-loop performance guarantees for automated controller tuning, which can be formulated as a black-box optimization problem under uncertainty. We use recent advances in non-convex scenario theory to provide a distribution-free bound on the probability of the closed-loop performance measures. To mitigate the computational complexity of the data-driven scenario optimization method, we restrict ourselves to a discrete set of candidate tuning parameters. We propose to generate these candidates by running constrained Bayesian optimization multiple times from different random seed points. We apply the proposed method to the tuning of an economic nonlinear model predictive controller for a semibatch reactor modeled by seven highly nonlinear differential equations.
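As a rough illustration of the workflow described in the abstract, the sketch below generates a small discrete candidate set (standing in for repeated constrained Bayesian optimization runs), evaluates each candidate on N sampled uncertainty scenarios, and selects the candidate with the best empirical worst-case closed-loop cost. The functions run_constrained_bo, sample_scenario, and closed_loop_cost are hypothetical placeholders, and the non-convex scenario-theory bound on the violation probability is not reproduced here.

```python
import numpy as np

# --- Hypothetical placeholders (assumptions, not the paper's code) ----------
def run_constrained_bo(seed):
    """Stand-in for one constrained Bayesian-optimization run that returns a
    candidate controller tuning vector (e.g., MPC weights, horizon length)."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0.1, 10.0, size=3)            # 3 tuning parameters

def sample_scenario(rng):
    """Stand-in for drawing one realization of the closed-loop uncertainty
    (e.g., plant-model mismatch, disturbances, initial conditions)."""
    return rng.normal(0.0, 1.0, size=7)

def closed_loop_cost(theta, scenario):
    """Stand-in for simulating the closed loop under tuning `theta` and one
    uncertainty `scenario`, returning the performance measure to be bounded."""
    return float(np.sum(theta) + np.linalg.norm(scenario))

# --- Step 1: discrete candidate set from repeated constrained BO runs -------
candidates = [run_constrained_bo(seed) for seed in range(5)]

# --- Step 2: scenario evaluation of each candidate ---------------------------
rng = np.random.default_rng(0)
N = 200                                               # number of sampled scenarios
scenarios = [sample_scenario(rng) for _ in range(N)]
worst_case = [max(closed_loop_cost(th, s) for s in scenarios) for th in candidates]

best = int(np.argmin(worst_case))
print("selected candidate:", candidates[best], "empirical worst case:", worst_case[best])
```

Restricting attention to a discrete candidate set keeps this step a simple enumeration over candidates and scenarios, which is the computational simplification the abstract refers to.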



Related research

Real-time vehicle dispatching in traditional car-sharing systems is already a computationally challenging scheduling problem. Electrification only exacerbates the computational difficulties as charge-level constraints come into play. To overcome this complexity, we employ an online minimum drift plus penalty (MDPP) approach for shared autonomous electric vehicle (SAEV) systems that (i) does not require a priori knowledge of customer arrival rates to the different parts of the system (i.e., it is practical from a real-world deployment perspective), (ii) ensures the stability of customer waiting times, (iii) ensures that the deviation of dispatch costs from a desirable dispatch cost can be controlled, and (iv) has a computational time-complexity that allows for real-time implementation. Using an agent-based simulator developed for SAEV systems, we test the MDPP approach under two scenarios with real-world calibrated demand and charger distributions: 1) a low-demand scenario with long trips, and 2) a high-demand scenario with short trips. Comparisons with other algorithms under both scenarios show that the proposed online MDPP outperforms all other algorithms in terms of both reduced customer waiting times and vehicle dispatching costs.
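The minimum drift plus penalty idea can be illustrated with a generic Lyapunov drift-plus-penalty dispatch rule; the sketch below is not the authors' implementation, and the queue, cost, and weight values are made-up placeholders. Each idle vehicle is greedily assigned to the zone where reducing the waiting-customer backlog (drift) outweighs V times the dispatch cost (penalty).

```python
import numpy as np

def mdpp_dispatch(queues, cost, V):
    """Generic drift-plus-penalty dispatch sketch.
    queues[z]  : number of customers waiting in zone z (backlog)
    cost[v, z] : dispatch cost of sending idle vehicle v to zone z
    V          : weight trading dispatch cost against queue stability"""
    n_vehicles, _ = cost.shape
    assignment = {}
    remaining = queues.astype(float)
    for v in range(n_vehicles):
        # Serving zone z reduces backlog Q_z (drift) at a cost weighted by V (penalty).
        score = remaining - V * cost[v]
        z = int(np.argmax(score))
        if score[z] > 0:                  # dispatch only if it improves the objective
            assignment[v] = z
            remaining[z] = max(remaining[z] - 1.0, 0.0)
    return assignment

queues = np.array([4, 0, 2])
cost = np.array([[1.0, 3.0, 2.0],
                 [2.5, 0.5, 1.0]])
print(mdpp_dispatch(queues, cost, V=0.5))
```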
A probabilistic performance-oriented controller design approach based on polynomial chaos expansion and optimization is proposed for flight dynamic systems. Unlike robust control techniques where uncertainties are conservatively handled, the proposed method aims at propagating uncertainties effectively and optimizing control parameters to satisfy the probabilistic requirements directly. To achieve this, the sensitivities of violation probabilities are evaluated by the expansion coefficients and the fourth moment method for reliability analysis, after which an optimization that minimizes failure probability under chance constraints is conducted. Afterward, a time-dependent polynomial chaos expansion is performed to validate the results. With this approach, the failure probability is reduced while guaranteeing the closed-loop performance, thus increasing the safety margin. Simulations are carried out on a longitudinal model subject to uncertain parameters to demonstrate the effectiveness of this approach.
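A minimal sketch of the uncertainty-propagation step only: fit a Hermite polynomial-chaos surrogate of a scalar performance output by least squares and estimate a violation probability by sampling the cheap surrogate. The fourth-moment reliability analysis and the chance-constrained optimization of the control parameters are not reproduced; the performance function, polynomial degree, and threshold are assumed placeholders rather than the paper's flight-dynamics model.

```python
import numpy as np

def performance(xi):
    """Hypothetical closed-loop performance metric as a function of a
    standard-normal uncertain parameter xi (placeholder)."""
    return 1.5 + 0.8 * xi + 0.3 * xi**2

def hermite_basis(xi):
    """Probabilists' Hermite polynomials up to degree 3 for one standard-normal input."""
    return np.column_stack([np.ones_like(xi), xi, xi**2 - 1.0, xi**3 - 3.0 * xi])

rng = np.random.default_rng(1)
xi_train = rng.standard_normal(500)
coeffs, *_ = np.linalg.lstsq(hermite_basis(xi_train), performance(xi_train), rcond=None)

# Propagate uncertainty by sampling the cheap surrogate instead of the expensive model.
xi_test = rng.standard_normal(100_000)
y = hermite_basis(xi_test) @ coeffs

threshold = 3.0                            # assumed requirement: y <= threshold
print("estimated violation probability:", np.mean(y > threshold))
```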
The closed-loop performance of model predictive controllers (MPCs) is sensitive to the choice of prediction models, controller formulation, and tuning parameters. However, prediction models are typically optimized for prediction accuracy instead of performance, and MPC tuning is typically done manually to satisfy (probabilistic) constraints. In this work, we demonstrate a general approach for automating the tuning of MPC under uncertainty. In particular, we formulate the automated tuning problem as a constrained black-box optimization problem that can be tackled with derivative-free optimization. We rely on a constrained variant of Bayesian optimization (BO) to solve the MPC tuning problem that can directly handle noisy and expensive-to-evaluate functions. The benefits of the proposed automated tuning approach are demonstrated on a benchmark continuously stirred tank reactor example.
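A common way to realize the constrained BO step is to weight the expected-improvement acquisition by the probability that a separately modeled constraint is satisfied; the sketch below uses that generic acquisition with scikit-learn Gaussian processes and is not necessarily the exact variant used in the paper. The objective and constraint functions are cheap stand-ins for noisy, expensive closed-loop MPC evaluations.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(theta):
    """Placeholder for the average closed-loop cost of tuning parameter theta."""
    return float((theta - 0.3) ** 2 + 0.01 * np.random.randn())

def constraint(theta):
    """Placeholder constraint metric; feasible when <= 0."""
    return float(0.2 - theta + 0.01 * np.random.randn())

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 1))                 # initial tuning candidates
f = np.array([objective(x[0]) for x in X])
g = np.array([constraint(x[0]) for x in X])

for _ in range(20):
    gp_f = GaussianProcessRegressor(alpha=1e-6, normalize_y=True).fit(X, f)
    gp_g = GaussianProcessRegressor(alpha=1e-6, normalize_y=True).fit(X, g)

    cand = rng.uniform(0, 1, size=(512, 1))
    mu_f, sd_f = gp_f.predict(cand, return_std=True)
    mu_g, sd_g = gp_g.predict(cand, return_std=True)

    feasible_f = f[g <= 0]
    best = feasible_f.min() if feasible_f.size else f.min()
    z = (best - mu_f) / np.maximum(sd_f, 1e-9)
    ei = (best - mu_f) * norm.cdf(z) + sd_f * norm.pdf(z)    # expected improvement
    pof = norm.cdf(-mu_g / np.maximum(sd_g, 1e-9))           # probability of feasibility
    x_next = cand[np.argmax(ei * pof)]

    X = np.vstack([X, x_next])
    f = np.append(f, objective(x_next[0]))
    g = np.append(g, constraint(x_next[0]))

mask = g <= 0
print("best feasible tuning parameter:", X[mask][np.argmin(f[mask])])
```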
We introduce a general framework for robust data-enabled predictive control (DeePC) for linear time-invariant (LTI) systems. The proposed framework enables us to obtain model-free optimal control for LTI systems based on noisy input/output data. More specifically, robust DeePC solves a min-max optimization problem to compute the optimal control sequence that is resilient to all possible realizations of the uncertainties in the input/output data within a prescribed uncertainty set. We present computationally tractable reformulations of the min-max problem with various uncertainty sets. Furthermore, we show that even though an accurate prediction of the future behavior is unattainable in practice because perfect input/output data are inaccessible, the obtained robust optimal control sequence provides performance guarantees for the actually realized input/output cost. We further show that robust DeePC generalizes and robustifies the regularized DeePC (with quadratic regularization or 1-norm regularization) proposed in the literature. Finally, we demonstrate the performance of the proposed robust DeePC algorithm on high-fidelity, nonlinear, and noisy simulations of a grid-connected power converter system.
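Robust DeePC builds on the regularized DeePC formulation mentioned above, in which Hankel matrices of recorded input/output data replace an explicit model. The sketch below solves a 1-norm-regularized DeePC problem with CVXPY on toy data from an assumed first-order plant; the robust min-max reformulations are not reproduced, and all weights and horizons are illustrative choices.

```python
import numpy as np
import cvxpy as cp

def hankel(w, L):
    """Block-Hankel matrix with L block rows from a signal w of shape (T, m)."""
    T, _ = w.shape
    return np.column_stack([w[i:i + L].reshape(-1) for i in range(T - L + 1)])

# --- Offline: one persistently exciting input/output trajectory (SISO toy data)
rng = np.random.default_rng(0)
T, Tini, Nf = 60, 4, 10
u = rng.uniform(-1, 1, size=(T, 1))
y = np.zeros((T, 1))
for t in range(1, T):                       # assumed first-order plant, for illustration
    y[t] = 0.9 * y[t - 1] + 0.5 * u[t - 1]

H_u, H_y = hankel(u, Tini + Nf), hankel(y, Tini + Nf)
Up, Uf = H_u[:Tini], H_u[Tini:]
Yp, Yf = H_y[:Tini], H_y[Tini:]

# --- Online: most recent Tini samples and a reference over the next Nf steps
u_ini, y_ini = u[-Tini:].reshape(-1), y[-Tini:].reshape(-1)
ref = np.ones(Nf)

g = cp.Variable(H_u.shape[1])
lam_g = 10.0                                # 1-norm regularization weight (assumed)
cost = (cp.sum_squares(Yf @ g - ref)
        + 0.1 * cp.sum_squares(Uf @ g)
        + lam_g * cp.norm1(g))
prob = cp.Problem(cp.Minimize(cost), [Up @ g == u_ini, Yp @ g == y_ini])
prob.solve()

u_pred = Uf @ g.value
print("first planned input:", u_pred[0])
```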
We propose a new framework to solve online optimization and learning problems in unknown and uncertain dynamical environments. This framework enables us to simultaneously learn the uncertain dynamical environment while making online decisions in a quantifiably robust manner. The main technical approach relies on the theory of distributionally robust optimization, which leverages adaptive probabilistic ambiguity sets. However, as defined, the ambiguity set usually leads to online intractable problems, and the first part of our work is directed at finding reformulations as online convex problems for two sub-classes of objective functions. To solve the resulting problems in the proposed framework, we further introduce an online version of the Nesterov accelerated-gradient algorithm and establish conditions under which it achieves a probabilistic regret bound. Two applications illustrate the applicability of the proposed framework.
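As a minimal illustration of the online solver component, the sketch below runs a generic online Nesterov accelerated-gradient update on a sequence of convex quadratic losses revealed one round at a time and reports the regret against the best fixed decision in hindsight. The distributionally robust reformulation with ambiguity sets is not reproduced, and the loss sequence is a made-up placeholder.

```python
import numpy as np

# Generic online Nesterov accelerated-gradient sketch (not the paper's algorithm):
# at each round, play x, observe a convex loss revealed by the environment,
# and update with a momentum (extrapolation) step.
rng = np.random.default_rng(0)
d, T, step = 3, 200, 0.1

x, x_prev = np.zeros(d), np.zeros(d)
losses, thetas = [], []

for t in range(1, T + 1):
    theta_t = rng.normal(0.5, 1.0, size=d)            # this round's unknown parameter
    thetas.append(theta_t)
    losses.append(0.5 * np.sum((x - theta_t) ** 2))   # loss of the decision actually played

    yk = x + (t - 1) / (t + 2) * (x - x_prev)         # Nesterov extrapolation point
    x_prev, x = x, yk - step * (yk - theta_t)         # gradient step at the extrapolated point

best_fixed = np.mean(thetas, axis=0)                  # best fixed decision in hindsight
regret = sum(losses) - sum(0.5 * np.sum((best_fixed - th) ** 2) for th in thetas)
print("cumulative regret:", regret)
```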