We study the impact of predictions in online Linear Quadratic Regulator control with both stochastic and adversarial disturbances in the dynamics. In both settings, we characterize the optimal policy and derive tight bounds on the minimum cost and dynamic regret. Perhaps surprisingly, our analysis shows that the conventional greedy MPC approach is a near-optimal policy in both stochastic and adversarial settings. Specifically, for length-$T$ problems, MPC requires only $O(\log T)$ predictions to reach $O(1)$ dynamic regret, which matches (up to lower-order terms) our lower bound on the required prediction horizon for constant regret.
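The greedy MPC policy referenced in this abstract can be sketched as follows: at each step, solve a finite-horizon LQR problem using the next $k$ predicted disturbances and apply only the first control input. This is a minimal illustration, not the paper's exact algorithm; the system matrices and horizon are placeholders.

```python
import numpy as np

def mpc_action(A, B, Q, R, x, w_pred):
    """Greedy MPC for x_{t+1} = A x_t + B u_t + w_t: solve the
    k-step LQR with predicted disturbances w_pred[0..k-1] via a
    backward Riccati pass (with affine terms), return the first input."""
    n = A.shape[0]
    P, q = Q.copy(), np.zeros(n)      # terminal value x'Px + 2 q'x
    gains = []
    for t in reversed(range(len(w_pred))):
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)                   # feedback gain
        kff = np.linalg.solve(S, B.T @ (P @ w_pred[t] + q))   # feedforward
        gains.append((K, kff))
        Acl = A - B @ K
        q = Acl.T @ (P @ w_pred[t] + q)   # affine term (uses old P)
        P = Q + A.T @ P @ Acl             # Riccati update
    K0, k0 = gains[-1]                    # gains for the first step
    return -K0 @ x - k0

# Example: double-integrator dynamics, 10-step prediction window
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
u = mpc_action(A, B, Q, R, np.array([1.0, 0.0]), [np.zeros(2)] * 10)
```

With zero predicted disturbances the feedforward terms vanish and the policy reduces to the standard finite-horizon LQR feedback law, consistent with the abstract's framing of MPC as the baseline greedy policy.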
Robust control is a core approach for controlling systems with performance guarantees that are robust to modeling error, and is widely used in real-world systems. However, current robust control approaches can only handle small system uncertainty, an
Deriving fast and effectively coordinated control actions remains a grand challenge affecting the secure and economic operation of today's large-scale power grid. This paper presents a novel artificial intelligence (AI) based methodology to achieve mu
In this paper, we study the dynamic regret of online linear quadratic regulator (LQR) control with time-varying cost functions and disturbances. We consider the case where a finite look-ahead window of cost functions and disturbances is available at
Optimal power flow (OPF) is the fundamental mathematical model in power system operations. Improving the solution quality of OPF provides substantial economic and engineering benefits. The convex reformulation of the original nonconvex alternating current OP
We propose a framework for integrating optimal power flow (OPF) with state estimation (SE) in the loop for distribution networks. Our approach combines a primal-dual gradient-based OPF solver with an SE feedback loop based on a limited set of sensors
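The primal-dual gradient structure described in this abstract can be illustrated on a toy dispatch problem: minimize a quadratic generation cost subject to meeting demand, where the constraint residual is supplied by a measurement callback standing in for the SE feedback. This is a hedged sketch of the general technique, not the paper's solver; the cost model, constraint, and `measure` callback are all illustrative assumptions.

```python
import numpy as np

def primal_dual_opf(c, demand, measure, steps=2000, alpha=0.01):
    """Toy primal-dual gradient loop: minimize sum_i c_i * p_i^2
    subject to sum_i p_i >= demand. The dual update uses measure(p)
    (a stand-in for state-estimation feedback) instead of the true
    constraint value."""
    p = np.zeros_like(c)
    lam = 0.0
    for _ in range(steps):
        # primal descent on the Lagrangian L = c.p^2 + lam*(demand - sum p)
        p -= alpha * (2.0 * c * p - lam)
        # dual ascent on measured constraint violation, projected to lam >= 0
        lam = max(0.0, lam + alpha * (demand - measure(p)))
    return p, lam

# Example: two generators, perfect sensing (measure = true total output)
c = np.array([1.0, 2.0])
p, lam = primal_dual_opf(c, demand=3.0, measure=lambda p: p.sum())
```

In a deployment the `measure` callback would return the SE estimate computed from the limited sensor set, which is what closes the loop between estimation and optimization in the abstract's framing.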