
Value Function in Maximum Hands-off Control

Added by: Takuya Ikeda
Publication date: 2014
Language: English





In this brief paper, we study the value function in maximum hands-off control. Maximum hands-off control, also known as sparse control, is the L0-optimal control among the admissible controls. Although the L0 measure is discontinuous and non-convex, we prove that the value function, i.e., the minimum L0 norm of the control, is a continuous and strictly convex function of the initial state over the reachable set, under an assumption on the controlled plant model. This property is important, in particular, for analyzing the sensitivity of the optimal value to uncertainties in the initial state, and for investigating stability by using the value function as a Lyapunov function in model predictive control.
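For orientation, the standard formulation behind these statements can be sketched as follows; the symbols are illustrative and not necessarily those of the paper. For a control u on a horizon [0,T], the L0 norm is the Lebesgue measure of its support, and the value function assigns to each initial state the smallest such measure achievable by an admissible control that steers the state to the origin:

\|u\|_{0} = \mu\big(\{\, t \in [0,T] : u(t) \neq 0 \,\}\big),

V(\xi) = \inf\big\{\, \|u\|_{0} \;:\; \dot{x}(t) = f\big(x(t),u(t)\big),\ x(0) = \xi,\ x(T) = 0,\ |u(t)| \le 1 \,\big\},

defined for \xi in the reachable set, i.e., the set of initial states from which the origin can be reached by some admissible control within the horizon.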



Related research

Maximum hands-off control is a control that has the minimum L0 norm among all feasible controls. It is known that the maximum hands-off (or L0-optimal) control problem is equivalent to the L1-optimal control problem under the assumption of normality. In this article, we analyze the maximum hands-off control for linear time-invariant systems without the normality assumption. For this purpose, we introduce the Lp-optimal control with 0<p<1, which is a natural relaxation of the L0 problem. Using it, we investigate the existence and the bang-off-bang property (i.e., the control takes only the values 1, 0, and -1) of the maximum hands-off control. We then describe a general relation between the maximum hands-off control and the L1-optimal control. We also prove the continuity and convexity of the value function, which play an important role in proving stability when the (finite-horizon) control is extended to model predictive control.
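A rough illustration of why the Lp cost with 0<p<1 relaxes the L0 cost, in illustrative notation rather than the paper's: for controls bounded in magnitude by 1,

\|u\|_{p}^{p} = \int_{0}^{T} |u(t)|^{p}\, dt \ \longrightarrow\ \mu\big(\{\, t : u(t) \neq 0 \,\}\big) = \|u\|_{0} \qquad \text{as } p \to 0^{+},

since |u(t)|^p tends to 1 wherever u(t) is nonzero and equals 0 wherever u(t) = 0. The Lp problem therefore interpolates between the L0 problem and the L1 problem recovered at p = 1.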
We prove the continuity of the value function of the sparse optimal control problem. A sparse optimal control is a control whose support has minimum measure among all admissible controls. Under the normality assumption, it is known that a sparse optimal control is given by an L1-optimal control, and that the value function of the sparse optimal control problem is identical to that of the L1-optimal control problem. Using these properties, we prove the continuity of the value function of the sparse optimal control problem by verifying the continuity of the value function of the L1-optimal control problem.
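The identity of the two value functions can be seen from the bang-off-bang structure: under normality, an L1-optimal control u* is also sparse-optimal and takes only the values 1, 0, and -1, so |u*(t)| lies in {0,1} almost everywhere and (a one-line sketch, in illustrative notation)

\|u^{*}\|_{1} = \int_{0}^{T} |u^{*}(t)|\, dt = \mu\big(\{\, t : u^{*}(t) \neq 0 \,\}\big) = \|u^{*}\|_{0},

so the minimum L1 norm and the minimum L0 norm coincide.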
For his work in the economics of climate change, Professor William Nordhaus was a co-recipient of the 2018 Nobel Memorial Prize in Economic Sciences. A core component of the work undertaken by Nordhaus is the Dynamic Integrated model of Climate and Economy, known as the DICE model. The DICE model is a discrete-time model with two control inputs and is primarily used in conjunction with a particular optimal control problem in order to estimate optimal pathways for reducing greenhouse gas emissions. In this paper, we provide a tutorial introduction to the DICE model and indicate challenges and open problems of potential interest for the systems and control community.
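For orientation, the kind of finite-horizon problem that the DICE model is paired with can be sketched schematically; this is only an illustration, not the actual DICE equations, and in DICE the two controls are commonly the emissions-mitigation rate and the savings rate:

\max_{\{\mu_t,\, s_t\}} \ \sum_{t=0}^{T} \beta^{t}\, U(c_t) \qquad \text{subject to } x_{t+1} = f(x_t, \mu_t, s_t), \quad x_0 \text{ given},

where x_t collects the economic and climate states, c_t is the per-capita consumption implied by them, U is a utility function, and \beta \in (0,1) is a discount factor.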
We present an algorithm for controlling and scheduling multiple linear time-invariant processes on a shared, bandwidth-limited communication network using adaptive sampling intervals. The controller is centralized and, at every sampling instant, computes not only the new control command for a process but also the time interval to wait until taking the next sample. The approach relies on model predictive control ideas, where the cost function penalizes the state and control effort as well as the time interval until the next sample is taken. The latter term is introduced to generate an adaptive sampling scheme for the overall system such that the sampling interval increases as the norm of the system state goes to zero. The paper presents a method for synthesizing such a predictive controller and gives explicit sufficient conditions for when it is stabilizing. Further explicit conditions guarantee conflict-free transmissions on the network. It is shown that the optimization problem may be solved off-line and that the controller can be implemented as a lookup table of state feedback gains. Simulation studies comparing the proposed algorithm to periodic sampling illustrate the potential performance gains.
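A minimal sketch of how such a lookup-table controller might be realized, assuming a one-dimensional state and made-up regions, gains, and sampling intervals; the actual table would be produced by the off-line optimization described above.

    import numpy as np

    # Hypothetical offline-computed lookup table: each entry is a polyhedral
    # region {x : H x <= h} with an associated state feedback gain K and the
    # time interval tau to wait before the next sample (illustrative values).
    TABLE = [
        {"H": np.array([[1.0], [-1.0]]), "h": np.array([0.1, 0.1]),
         "K": np.array([[-0.5]]), "tau": 0.50},   # small state: sample slowly
        {"H": np.array([[1.0], [-1.0]]), "h": np.array([1.0, 1.0]),
         "K": np.array([[-1.2]]), "tau": 0.10},   # larger state: sample fast
    ]

    def control_and_next_sample(x):
        """Return (u, tau): control command and time until the next sample."""
        for entry in TABLE:
            if np.all(entry["H"] @ x <= entry["h"]):
                return entry["K"] @ x, entry["tau"]
        # Fallback for states outside the tabulated regions.
        return TABLE[-1]["K"] @ x, TABLE[-1]["tau"]

    u, tau = control_and_next_sample(np.array([0.05]))

The first matching region determines both the feedback gain and the waiting time, so states near the origin are sampled less frequently, which mirrors the adaptive sampling behaviour described in the abstract.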
We study the new concept of relative coobservability in decentralized supervisory control of discrete-event systems under partial observation. This extends our previous work on relative observability from a centralized setup to a decentralized one. A fundamental concept in decentralized supervisory control is coobservability (and its several variations); this property, however, is not closed under set union, and hence a supremal coobservable sublanguage generally does not exist. Our proposed relative coobservability, although stronger than coobservability, is algebraically well behaved, and the supremal relatively coobservable sublanguage of a given language exists. We present an algorithm to compute this supremal sublanguage. Moreover, relative coobservability is weaker than conormality, which is also closed under set union; unlike conormality, relative coobservability imposes no constraint on disabling unobservable controllable events.
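Because the class of relatively coobservable sublanguages of a language K is closed under set union, the supremal element mentioned above can be characterized as the union of all such sublanguages; this is a standard consequence of closure under union, written here in illustrative notation rather than the paper's:

\sup \mathcal{RC}(K) \;=\; \bigcup \big\{\, K' \subseteq K \;:\; K' \text{ is relatively coobservable} \,\big\}.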