This paper characterizes the solution to a finite-horizon min-max optimal control problem in which the system is linear and discrete-time with control and state constraints, and the cost is quadratic; the disturbance is negatively costed, as in the standard $\mathcal{H}_\infty$ problem, and is constrained. The cost is minimized over control policies and maximized over disturbance sequences, so that the solution yields a feedback control. It is shown that the value function is piecewise quadratic and the optimal control policy piecewise affine, being quadratic and affine, respectively, in polytopes that partition the domain of the value function.
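A rough sketch of the kind of min-max dynamic-programming recursion such a formulation produces (the symbols $A$, $B$, $G$, $Q$, $R$, $\gamma$ are generic placeholders, not the paper's notation): with dynamics $x^+ = Ax + Bu + Gw$ and stage cost $x^\top Q x + u^\top R u - \gamma^2 w^\top w$, the value functions satisfy $V_k(x) = \min_{u} \max_{w} \{\, x^\top Q x + u^\top R u - \gamma^2 w^\top w + V_{k+1}(Ax + Bu + Gw) \,\}$, the minimization and maximization being taken over the admissible (constrained) control and disturbance sets; with polyhedral constraints of this kind each $V_k$ is piecewise quadratic and the minimizer piecewise affine on a polytopic partition of the state space.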
This paper deals with the distributed $\mathcal{H}_2$ optimal control problem for linear multi-agent systems. In particular, we consider a suboptimal version of the distributed $\mathcal{H}_2$ optimal control problem. Given a linear multi-agent system…
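For orientation only (generic notation, not taken from the paper): the $\mathcal{H}_2$ norm bounded in a suboptimal formulation is that of a closed-loop map from disturbance to performance output; for instance, for $\dot{x} = A_{cl} x + E d$, $z = C_{cl} x$ with $A_{cl}$ Hurwitz, one has $\|T_{zd}\|_{\mathcal{H}_2}^2 = \operatorname{trace}(E^\top X E)$ where $X$ solves $A_{cl}^\top X + X A_{cl} + C_{cl}^\top C_{cl} = 0$, and the suboptimal problem asks for a structured (distributed) gain achieving $\|T_{zd}\|_{\mathcal{H}_2}^2 < \gamma$ for a prescribed tolerance $\gamma$ rather than the exact minimum.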
Multistage risk-averse optimal control problems with nested conditional risk mappings are gaining popularity in various application domains. Risk-averse formulations interpolate between the classical expectation-based stochastic and minimax optimal control…
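A standard example of such a conditional risk mapping (illustrative, not necessarily the one adopted in the paper) is the average value-at-risk $\mathrm{AVaR}_\alpha(Z) = \min_{t \in \mathbb{R}} \{\, t + \tfrac{1}{\alpha}\, \mathbb{E}[(Z - t)_+] \,\}$: for $\alpha = 1$ it reduces to the expectation $\mathbb{E}[Z]$, while as $\alpha \to 0$ it approaches the worst case $\operatorname{ess\,sup} Z$, and composing such mappings stage by stage yields the nested multistage risk-averse cost.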
In this paper, we obtain several structural results for the value function associated with a mean-field optimal control problem of Bolza type in the space of measures. After establishing the sensitivity relations bridging between the costates of the maximum principle…
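In generic notation (not the paper's), a mean-field Bolza problem in the space of measures takes the form $\min_u \int_0^T L(\mu_t, u_t)\,dt + \varphi(\mu_T)$ subject to the continuity equation $\partial_t \mu_t + \operatorname{div}(v(\mu_t, u_t)\,\mu_t) = 0$ with $\mu_0$ prescribed, where $\mu_t$ is the state distribution, $L$ the running cost and $\varphi$ the terminal cost; in such formulations the costates are the adjoint variables attached to this dynamical constraint.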
This paper applies a reinforcement learning (RL) method to solve infinite-horizon continuous-time stochastic linear quadratic problems, where drift and diffusion terms in the dynamics may depend on both the state and control. Based on Bellman's dynamic programming…
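An illustrative instance of such a problem (generic notation, not the paper's) is $dX_t = (A X_t + B u_t)\,dt + (C X_t + D u_t)\,dW_t$ with a cost of the form $J(u) = \mathbb{E} \int_0^\infty e^{-\rho t} (X_t^\top Q X_t + u_t^\top R u_t)\,dt$, where the diffusion coefficient depends on both state and control; in an RL treatment the optimal feedback gain is estimated from observed trajectories through the Bellman equation rather than from explicit knowledge of $(A, B, C, D)$.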
This paper studies the problem of steering a linear time-invariant system subject to state and input constraints towards a goal location that may be inferred only through partial observations. We assume mixed-observable settings, where the system's state…
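In such mixed-observable settings (sketch with generic notation, not the paper's), the continuous state $x_t$ is measured while the discrete goal $g$ is tracked through a Bayesian belief updated as $b_{t+1}(g) \propto p(o_{t+1} \mid g, x_{t+1})\, b_t(g)$, and the constrained steering input is then computed against the current belief over goals.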