A new adaptive predictive controller for constrained linear systems is presented. The main feature of the proposed controller is the partition of the input into two components. The first component persistently excites the system, in order to guarantee accurate and convergent parameter estimates in a deterministic framework. An MPC-inspired receding-horizon optimization problem is developed to achieve the required excitation in a manner that is optimal for the plant. The remaining control action is employed by a conventional tube MPC controller to regulate the plant in the presence of parametric uncertainty and of the excitation generated for estimation purposes. Constraint satisfaction, robust exponential stability, and convergence of the estimates are guaranteed under design conditions only mildly more demanding than those of standard MPC implementations.
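The input partition described above can be sketched as follows. This is an illustrative skeleton only, not the paper's algorithm: the solver functions, `theta_hat`, and all names are assumptions, and the tube MPC solver is assumed to receive the excitation as a known, bounded disturbance.

```python
import numpy as np

def control_step(x, theta_hat, solve_excitation, solve_tube_mpc):
    """One step of the partitioned controller (illustrative sketch).

    x               -- current state
    theta_hat       -- current parameter estimate (assumed name)
    solve_excitation-- receding-horizon problem producing the persistently
                       exciting input component
    solve_tube_mpc  -- tube MPC problem regulating the plant, given the
                       excitation as a known disturbance
    """
    u_excite = solve_excitation(x, theta_hat)
    u_reg = solve_tube_mpc(x, theta_hat, u_excite)
    # The applied input is the sum of the two components.
    return u_reg + u_excite
```

With placeholder solvers, `control_step` simply returns the sum of the two input components, which is the structural point of the partition.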
We present a sample-based Learning Model Predictive Controller (LMPC) for constrained uncertain linear systems subject to bounded additive disturbances. The proposed controller builds on earlier work on LMPC for deterministic systems. First, we introduce the design of the safe set and value function used to guarantee safety and performance improvement. Afterwards, we show how these quantities can be approximated using noisy historical data. The effectiveness of the proposed approach is demonstrated on a numerical example. We show that the proposed LMPC is able to safely explore the state space and to iteratively improve the worst-case closed-loop performance, while robustly satisfying state and input constraints.
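The sampled safe set and value-function approximation can be illustrated with a minimal sketch. This is not the authors' implementation: the neighborhood radius, the worst-case (max) aggregation, and the data layout are assumptions chosen to reflect the abstract's worst-case performance guarantee.

```python
import numpy as np

def build_safe_set(trajectories):
    """Collect visited states and cost-to-go values from stored trajectories.

    `trajectories` is a list of (states, stage_costs) pairs recorded in
    earlier closed-loop iterations (assumed data layout).
    """
    states, values = [], []
    for xs, costs in trajectories:
        # Cost-to-go at step k is the sum of stage costs from k onward.
        ctg = np.cumsum(costs[::-1])[::-1]
        states.extend(xs)
        values.extend(ctg)
    return np.array(states), np.array(values)

def value_estimate(x, states, values, radius=0.5):
    """Pessimistic value estimate over sampled states near x.

    Returns np.inf when x is not covered by the sampled safe set,
    and the worst recorded cost-to-go in the neighborhood otherwise.
    """
    d = np.linalg.norm(states - x, axis=1)
    near = d <= radius
    if not near.any():
        return np.inf
    return values[near].max()
```

The worst-case aggregation mirrors the abstract's goal of improving worst-case closed-loop performance across iterations while only trusting states actually visited in the data.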
Ratio control for two interacting processes is proposed using a PID feedforward design based on a model predictive control (MPC) scheme. At each sampling instant, the MPC control action minimizes a state-dependent performance index associated with a PID-type state vector, thus yielding a PID-type control structure. Compared with standard MPC formulations using separate single-variable control, such a control action accounts for the non-uniformity of the two process outputs. After reformulating the MPC control law as a PID control law, we provide conditions on the prediction horizon and weighting matrices under which the closed loop is asymptotically stable, and demonstrate the effectiveness of the approach with simulation and experimental results.
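The PID-type state vector underlying the reformulation can be sketched as below. This is a hedged illustration, not the paper's controller: in the paper the gain results from the receding-horizon minimization, whereas here `K` is a fixed placeholder, and the class name and discretization are assumptions.

```python
import numpy as np

class PIDStateController:
    """Control law acting on a PID-type state vector (illustrative sketch)."""

    def __init__(self, K):
        self.K = np.asarray(K)   # gain on [error, integral, derivative]
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        # Build the PID-type state vector from the tracking error.
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        z = np.array([error, self.integral, derivative])
        # The control action is linear in the PID-type state, which is
        # what makes an MPC law over this state reduce to PID form.
        return -self.K @ z
```

The point of the sketch is structural: because the performance index is a function of `z`, the minimizing control action inherits the proportional-integral-derivative structure.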
We propose a Thompson sampling-based learning algorithm for the Linear Quadratic (LQ) control problem with unknown system parameters. The algorithm, called Thompson sampling with dynamic episodes (TSDE), uses two stopping criteria to determine the lengths of the dynamic episodes. The first stopping criterion controls the growth rate of the episode length. The second stopping criterion is triggered when the determinant of the sample covariance matrix falls below half of its previous value. We show, under some conditions on the prior distribution, that the expected (Bayesian) regret of TSDE accumulated up to time T is bounded by O(sqrt{T}), where O(.) hides constants and logarithmic factors. This is the first O(sqrt{T}) bound on the expected regret of learning in LQ control. By introducing a reinitialization schedule, we also show that the algorithm is robust to time-varying drift in the model parameters. Numerical simulations are provided to illustrate the performance of TSDE.
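The two episode-stopping tests described above can be written as a short predicate. This is a hedged sketch: the function and argument names (`prev_len`, `det_at_start`, etc.) are illustrative, not from the paper.

```python
import numpy as np

def episode_should_end(t, episode_start, prev_len, cov, det_at_start):
    """Return True when the current Thompson-sampling episode should end.

    Criterion 1: the episode length may not exceed the previous episode's
    length by more than one step, which controls the growth rate of
    episode lengths.
    Criterion 2: the determinant of the sample covariance matrix has
    dropped below half of its value at the start of the episode.
    """
    length_exceeded = (t - episode_start) > prev_len
    det_halved = np.linalg.det(cov) < 0.5 * det_at_start
    return length_exceeded or det_halved
```

At the start of each new episode, the parameters would be resampled from the posterior and `det_at_start` and `prev_len` updated, so episodes shorten whenever the data sharply reduce posterior uncertainty.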
We develop a novel data-driven robust model predictive control (DDRMPC) approach for automatic control of irrigation systems. The fundamental idea is to integrate both mechanistic models, which describe dynamics in soil moisture variations, and data-driven models, which characterize uncertainty in forecast errors of evapotranspiration and precipitation, into a holistic systems control framework. To better capture the support of the uncertainty distribution, we take a new learning-based approach by constructing uncertainty sets from historical data. For evapotranspiration forecast errors, a support vector clustering-based uncertainty set is adopted, which can be conveniently built from historical data. As for precipitation forecast errors, we analyze the dependence of their distribution on forecast values, and design a tailored uncertainty set based on the properties of this type of uncertainty. In this way, the overall uncertainty distribution can be accurately characterized, ultimately supporting rational and efficient control decisions. To assure the quality of the data-driven uncertainty sets, a training-calibration scheme is used to provide theoretical performance guarantees. A generalized affine decision rule is adopted to obtain tractable approximations of the optimal control problems, thereby ensuring the practicability of DDRMPC. Case studies using real data show that DDRMPC can reliably maintain soil moisture above the safety level and avoid crop devastation. The proposed DDRMPC approach leads to a 40% reduction in total water consumption compared to a fine-tuned open-loop control strategy. In comparison with carefully tuned rule-based control and certainty equivalent model predictive control, the proposed DDRMPC approach can significantly reduce the total water consumption and improve the control performance.
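The training-calibration idea can be illustrated with a deliberately simplified uncertainty set. This sketch uses an axis-aligned box built from training-sample quantiles rather than the paper's support-vector-clustering-based set; the function names, the quantile construction, and the coverage threshold are all assumptions.

```python
import numpy as np

def fit_box(train, alpha=0.05):
    """Box uncertainty set covering the central (1 - alpha) mass per
    dimension of the training forecast errors (illustrative stand-in
    for the SVC-based set)."""
    lo = np.quantile(train, alpha / 2, axis=0)
    hi = np.quantile(train, 1 - alpha / 2, axis=0)
    return lo, hi

def empirical_coverage(calib, lo, hi):
    """Fraction of held-out calibration samples inside the set.

    In a training-calibration scheme, the set is fit on training data and
    certified on calibration data before entering the robust MPC problem.
    """
    inside = np.all((calib >= lo) & (calib <= hi), axis=1)
    return inside.mean()
```

If the calibration coverage falls below the target level, the set would be enlarged before use, which is the mechanism behind the performance guarantees mentioned above.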
In quantum engineering, faults may occur in a quantum control system, which can destabilize the system or degrade other relevant aspects of its performance. This note presents an estimator-based fault-tolerant control design approach for a class of linear quantum stochastic systems subject to fault signals. In this approach, the fault signals and some commutative components of the quantum system observables are estimated, and a fault-tolerant controller is designed to compensate for the effect of the fault signals. Numerical procedures are developed for the controller design, and an example is presented to demonstrate the proposed approach.