We prove the continuity of the value function of the sparse optimal control problem. A sparse optimal control is a control whose support has minimum measure among all admissible controls. Under the normality assumption, it is known that a sparse optimal control is given by an $L^1$-optimal control; moreover, the value function of the sparse optimal control problem is identical to that of the $L^1$-optimal control problem. Using these properties, we prove the continuity of the value function of the sparse optimal control problem by verifying that of the $L^1$-optimal control problem.
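To make the equivalence explicit, here is a minimal formalization in our own notation (the plant, horizon, and steering constraint are assumptions, not taken from the paper): writing $\mathcal{U}(x_0)$ for the admissible controls steering the initial state $x_0$ to the origin in time $T$, the two value functions are
\[
V_0(x_0) = \inf_{u \in \mathcal{U}(x_0)} \mu\bigl(\{\, t \in [0,T] : u(t) \neq 0 \,\}\bigr),
\qquad
V_1(x_0) = \inf_{u \in \mathcal{U}(x_0)} \int_0^T |u(t)|\, dt,
\]
where $\mu$ is the Lebesgue measure. Under normality, $V_0 = V_1$ on the reachable set, so continuity of $V_1$ transfers directly to $V_0$.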
In this brief paper, we study the value function in maximum hands-off control. Maximum hands-off control, also known as sparse control, is the $L^0$-optimal control among the admissible controls. Although the $L^0$ measure is discontinuous and non-convex, we prove that the value function, i.e., the minimum $L^0$ norm of the control, is a continuous and strictly convex function of the initial state over the reachable set, under an assumption on the controlled plant model. This property is important, in particular, for analyzing the sensitivity of the optimal value to uncertainties in the initial state, and for investigating stability when the value function is used as a Lyapunov function in model predictive control.
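The discontinuity of the $L^0$ measure as a functional of the control is easy to see numerically. Below is a small, hedged sketch (the grid, tolerance, and example signal are our illustrative choices, not the paper's): a grid approximation of $\mu(\{t : u(t) \neq 0\})$ jumps under an arbitrarily small perturbation of $u$, even though, per the result above, the value function of the initial state remains continuous.

```python
import numpy as np

# Grid approximation of the L0 measure mu({t : u(t) != 0}) of a control signal.
def l0_measure(u, dt, tol=1e-12):
    u = np.asarray(u, dtype=float)
    return dt * np.count_nonzero(np.abs(u) > tol)

dt = 0.01
t = np.arange(0.0, 1.0, dt)
u = np.where(t < 0.3, 1.0, 0.0)   # control active only on [0, 0.3)
print(l0_measure(u, dt))          # ~0.3
print(l0_measure(u + 1e-6, dt))   # ~1.0: a tiny perturbation makes the whole
                                  # signal "active", illustrating discontinuity
```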
For his work on the economics of climate change, Professor William Nordhaus was a co-recipient of the 2018 Nobel Memorial Prize in Economic Sciences. A core component of Nordhaus's work is the Dynamic Integrated model of Climate and the Economy, known as the DICE model. The DICE model is a discrete-time model with two control inputs and is primarily used in conjunction with a particular optimal control problem to estimate optimal pathways for reducing greenhouse gas emissions. In this paper, we provide a tutorial introduction to the DICE model and indicate challenges and open problems of potential interest to the systems and control community.
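To give a feel for the discrete-time, two-input structure mentioned above, here is a toy, heavily simplified sketch (illustrative only: one-box carbon and climate dynamics, no population or technology growth, and made-up parameter values; this is not Nordhaus's calibrated model). The two controls, as in DICE, are the emissions-abatement rate mu and the savings rate s.

```python
from math import log2
from typing import NamedTuple

class Params(NamedTuple):
    A: float = 3.0        # total factor productivity
    gamma: float = 0.3    # capital elasticity of output
    deltaK: float = 0.1   # capital depreciation per period
    a2: float = 0.003     # damage coefficient
    theta1: float = 0.05  # abatement cost level
    theta2: float = 2.6   # abatement cost exponent
    sigma: float = 0.35   # emissions intensity of gross output
    deltaM: float = 0.01  # carbon decay per period
    xi: float = 0.1       # temperature adjustment speed
    F2x: float = 3.7      # forcing from doubling CO2 (W/m^2)
    lam: float = 1.2      # climate feedback parameter
    Mpre: float = 600.0   # pre-industrial carbon stock

def step(state, mu, s, p=Params()):
    """One period of the toy climate-economy dynamics under controls (mu, s)."""
    K, M, T = state                              # capital, carbon, temperature
    Y = p.A * K ** p.gamma                       # gross output
    net = Y / (1.0 + p.a2 * T ** 2) * (1.0 - p.theta1 * mu ** p.theta2)
    K1 = (1.0 - p.deltaK) * K + s * net          # capital accumulation
    M1 = (1.0 - p.deltaM) * M + p.sigma * (1.0 - mu) * Y      # carbon stock
    T1 = T + p.xi * (p.F2x * log2(M1 / p.Mpre) - p.lam * T)   # temperature
    return (K1, M1, T1), (1.0 - s) * net         # next state, consumption

print(step((100.0, 800.0, 1.0), mu=0.2, s=0.25))
```

The associated optimal control problem then chooses the sequences of mu and s to maximize discounted utility of consumption, which is the pathway-estimation use of the model described above.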
Flexible loads, e.g., thermostatically controlled loads (TCLs), are technically capable of participating in demand response (DR) programs. However, a number of challenges must be resolved before such participation can be implemented in practice en masse. First, individual TCLs must be aggregated and operated in sync to scale up DR benefits. Second, the uncertainty of TCLs must be accounted for. Third, exercising the flexibility of TCLs must be coordinated with distribution system operations to avoid unnecessary power losses and to ensure compliance with power flow and voltage limits. This paper addresses these challenges. We propose a network-constrained, open-loop, stochastic optimal control formulation. The first part of this formulation represents ensembles of collocated TCLs modelled by an aggregated Markov Process (MP), where each MP state is associated with a given power consumption or production level. The second part embeds the MPs in a multi-period distribution power flow optimization. In this optimization, the control of TCL ensembles is regulated by transition probability matrices and physically enabled by local active and reactive power controls at the TCL locations. The optimization is solved with a Spatio-Temporal Dual Decomposition (ST-D2) algorithm. The performance of the proposed formulation and algorithm is demonstrated on the IEEE 33-bus distribution model.
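A minimal simulation sketch of the aggregated-MP view may help fix ideas (the state count, power levels, and example matrix below are our illustrative choices, not the paper's): the controller selects a row-stochastic transition matrix each period, the ensemble's state distribution evolves by left-multiplication, and the ensemble's expected power draw is the dot product of that distribution with the per-state power levels.

```python
import numpy as np

def simulate_ensemble(pi0, P_sequence, power_levels):
    """Evolve a TCL ensemble's state distribution under controlled
    transition matrices; return the aggregate expected power per period."""
    pi = np.asarray(pi0, dtype=float)
    powers = []
    for P in P_sequence:
        assert np.allclose(P.sum(axis=1), 1.0), "rows of P must sum to 1"
        pi = pi @ P                       # distribution over MP states
        powers.append(pi @ power_levels)  # aggregate expected power
    return np.array(powers)

# Example: 3 states = {off, low, high} with per-TCL power levels 0, 1, 2 kW.
pi0 = np.array([0.6, 0.3, 0.1])
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.7, 0.2],
              [0.0, 0.3, 0.7]])
print(simulate_ensemble(pi0, [P] * 4, np.array([0.0, 1.0, 2.0])))
```

In the full formulation, the entries of $P$ at each location and period are decision variables coupled to the power flow constraints, which is what the ST-D2 algorithm decomposes across space and time.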
This article treats two problems dealing with the control of linear systems in the presence of a jammer that can sporadically turn off the control signal. The first is the standard reachability problem, and the second is the standard linear quadratic regulator problem, both under the above class of jamming signals. We provide necessary and sufficient conditions for optimality based on a nonsmooth Pontryagin maximum principle.
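One natural formalization of such a jammer, in our notation (an assumption for illustration, not necessarily the article's exact model), multiplies the input channel by a signal that drops to zero on sporadic jamming intervals:
\[
\dot{x}(t) = A x(t) + \sigma(t)\, B u(t), \qquad \sigma(t) \in \{0, 1\},
\]
so that the control $u$ has no effect whenever $\sigma(t) = 0$; both problems are then posed over this class of signals $\sigma$.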
The paper studies approximations and control of a processor sharing (PS) server whose service rate depends on the number of jobs occupying the server. The control of such a system is implemented by imposing a limit on the number of jobs that can share the server concurrently, with the remaining jobs waiting in a first-in-first-out (FIFO) buffer. A desirable control scheme should strike the right balance between efficiency (operating at a high service rate) and parallelism (preventing small jobs from getting stuck behind large ones). We employ the framework of heavy-traffic diffusion analysis to devise near-optimal control heuristics for such a queueing system. However, while the literature on diffusion control of state-dependent queueing systems begins with a sequence of systems and an exogenously defined drift function, we begin with a finite discrete PS server and propose an axiomatic recipe to explicitly construct a sequence of state-dependent PS servers, which then yields a drift function. We establish diffusion approximations and use them to obtain insightful, closed-form approximations for the original system under a static concurrency limit control policy. We extend our study to control policies that dynamically adjust the concurrency limit. We provide two novel numerical algorithms to solve the associated diffusion control problem. Our algorithms can be viewed as average cost iteration: the first algorithm uses binary search on the average cost and can find an $\epsilon$-optimal policy in time $O\left(\log^2 \frac{1}{\epsilon}\right)$; the second algorithm uses the Newton-Raphson method for root finding and requires $O\left(\log \frac{1}{\epsilon} \log\log \frac{1}{\epsilon}\right)$ time. Numerical experiments demonstrate the accuracy of our approximation for choosing optimal or near-optimal static and dynamic concurrency control heuristics.
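A hedged sketch of the binary-search flavor of average cost iteration described above (the oracle is a hypothetical placeholder standing in for a solve of the associated diffusion-control Bellman equation; the quoted $O\left(\log^2 \frac{1}{\epsilon}\right)$ bound presumably also counts the per-iteration solve cost):

```python
def bisect_average_cost(oracle, lo, hi, eps):
    """Return a value within eps of the optimal average cost.

    oracle(beta) -> True iff some policy achieves average cost <= beta
    (e.g., by checking solvability of the Bellman equation at level beta).
    Performs O(log((hi - lo) / eps)) oracle calls.
    """
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if oracle(mid):
            hi = mid   # achievable: the optimal average cost is at most mid
        else:
            lo = mid   # not achievable: the optimal average cost exceeds mid
    return hi

# Toy usage with a stand-in oracle whose true threshold is 2.5:
print(bisect_average_cost(lambda b: b >= 2.5, lo=0.0, hi=10.0, eps=1e-6))
```

The Newton-Raphson variant replaces the bisection step with a root-finding update on the same quantity, which accounts for its faster $O\left(\log \frac{1}{\epsilon} \log\log \frac{1}{\epsilon}\right)$ rate.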