
Characterization of the solution to a constrained H-infinity optimal control problem

Posted by Eric Kerrigan
Publication date: 2005

This paper characterizes the solution to a finite horizon min-max optimal control problem where the system is linear and discrete-time with control and state constraints, and the cost quadratic; the disturbance is negatively costed, as in the standard H-infinity problem, and is constrained. The cost is minimized over control policies and maximized over disturbance sequences so that the solution yields a feedback control. It is shown that the value function is piecewise quadratic and the optimal control policy piecewise affine, being quadratic and affine, respectively, in polytopes that partition the domain of the value function.
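
For orientation, a generic statement of the finite-horizon constrained min-max problem described above is sketched below; the symbols (A, B, G, Q, R, P, gamma, and the constraint sets) are assumed notation for this sketch, not taken from the paper.

\[
\begin{aligned}
V_N(x) = \min_{\mu_0,\dots,\mu_{N-1}} \; \max_{w_0,\dots,w_{N-1}} \;
  & \sum_{k=0}^{N-1} \left( \|x_k\|_Q^2 + \|u_k\|_R^2 - \gamma^2 \|w_k\|^2 \right) + \|x_N\|_P^2 \\
\text{subject to} \quad
  & x_{k+1} = A x_k + B u_k + G w_k, \qquad x_0 = x, \\
  & u_k = \mu_k(x_k) \in \mathbb{U}, \quad x_k \in \mathbb{X}, \quad w_k \in \mathbb{W}.
\end{aligned}
\]

The disturbance enters with a negative weight (the "negatively costed" disturbance of the standard H-infinity setup), and the stated result is that V_N is piecewise quadratic, and each policy \mu_k piecewise affine, on a polytopic partition of the domain.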




Read also

This paper deals with the distributed $\mathcal{H}_2$ optimal control problem for linear multi-agent systems. In particular, we consider a suboptimal version of the distributed $\mathcal{H}_2$ optimal control problem. Given a linear multi-agent system with identical agent dynamics and an associated $\mathcal{H}_2$ cost functional, our aim is to design a distributed diffusive static protocol such that the protocol achieves state synchronization for the controlled network and such that the associated cost is smaller than an a priori given upper bound. We first analyze the $\mathcal{H}_2$ performance of linear systems and then apply the results to linear multi-agent systems. Two design methods are provided to compute such a suboptimal distributed protocol. For each method, the expression for the local control gain involves a solution of a single Riccati inequality of dimension equal to the dimension of the individual agent dynamics, and the smallest nonzero and the largest eigenvalue of the graph Laplacian.
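
As a rough, hedged illustration of the kind of computation this abstract points to (not the paper's exact method), the Python sketch below solves one Riccati equation of the agent's dimension and extracts the smallest nonzero and largest eigenvalues of a graph Laplacian; the agent dynamics, weights, graph, and coupling rule are all hypothetical.

import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical single-agent dynamics (double integrator) and weights.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)   # assumed state weight
R = np.eye(1)   # assumed input weight

# Laplacian of an undirected path graph on 4 agents (assumed topology).
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
eigs = np.sort(np.linalg.eigvalsh(L))
lam_2, lam_N = eigs[1], eigs[-1]   # smallest nonzero / largest eigenvalue

# One Riccati solve of the agent's dimension (an equation here, standing in
# for the Riccati inequality mentioned in the abstract).
P = solve_continuous_are(A, B, Q, R)

# A common diffusive protocol shape: u_i = c * K * sum_j a_ij (x_j - x_i),
# with the coupling gain c scaled using the Laplacian spectrum (assumed form).
c = 1.0 / lam_2
K = -np.linalg.solve(R, B.T @ P)
print(f"lambda_2 = {lam_2:.3f}, lambda_N = {lam_N:.3f}")
print("local gain K =", K)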
Multistage risk-averse optimal control problems with nested conditional risk mappings are gaining popularity in various application domains. Risk-averse formulations interpolate between the classical expectation-based stochastic and minimax optimal control. This way, risk-averse problems aim at hedging against extreme low-probability events without being overly conservative. At the same time, risk-based constraints may be employed either as surrogates for chance (probabilistic) constraints or as a robustification of expectation-based constraints. Such multistage problems, however, have been identified as particularly hard to solve. We propose a decomposition method for such nested problems that allows us to solve them via efficient numerical optimization methods. Alongside, we propose a new form of risk constraints which accounts for the propagation of uncertainty in time.
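
For concreteness, the nested structure mentioned above typically takes the following form (a sketch with assumed notation; \rho_t denotes a one-step conditional risk mapping and \ell_t a stage cost):

\[
J(\pi) = \rho_0\Big( \ell_0(x_0,u_0) + \rho_1\big( \ell_1(x_1,u_1) + \cdots + \rho_{N-1}\big( \ell_{N-1}(x_{N-1},u_{N-1}) + \ell_N(x_N) \big) \cdots \big) \Big).
\]

Choosing \rho_t = \mathbb{E}[\,\cdot\,|\,\mathcal{F}_t] recovers the classical expectation-based problem, \rho_t = \operatorname{ess\,sup} recovers the minimax problem, and mappings such as the average value-at-risk interpolate between the two.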
In this paper, we obtain several structural results for the value function associated to a mean-field optimal control problem of Bolza type in the space of measures. After establishing the sensitivity relations bridging between the costates of the maximum principle and metric superdifferentials of the value function, we investigate semiconcavity properties of the latter with respect to both variables. We then characterise optimal trajectories using set-valued feedback mappings defined in terms of suitable directional derivatives of the value function.
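
A generic Bolza-type mean-field problem of the kind discussed above can be sketched as follows (notation assumed for this sketch): minimize a running plus terminal cost over curves of measures driven by a continuity equation,

\[
V(t,\mu) = \inf_{v} \int_t^T L(\mu_s, v_s)\, \mathrm{d}s + G(\mu_T), \qquad \partial_s \mu_s + \nabla \cdot (v_s\, \mu_s) = 0, \quad \mu_t = \mu,
\]

so that the value function V is defined on the space of measures and the sensitivity relations link its superdifferentials to the costates of the maximum principle.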
Na Li, Xun Li, Jing Peng (2020)
This paper applies a reinforcement learning (RL) method to solve infinite horizon continuous-time stochastic linear quadratic problems, where drift and diffusion terms in the dynamics may depend on both the state and control. Based on Bellman's dynamic programming principle, an online RL algorithm is presented to attain the optimal control with just partial system information. This algorithm directly computes the optimal control rather than estimating the system coefficients and solving the related Riccati equation. It requires only local trajectory information, greatly simplifying the computation. Two numerical examples are carried out to shed light on our theoretical findings.
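
The problem class referred to above can be written, with assumed notation, as a controlled diffusion whose drift and diffusion both depend on state and control:

\[
\mathrm{d}X_t = (A X_t + B u_t)\, \mathrm{d}t + (C X_t + D u_t)\, \mathrm{d}W_t, \qquad J(u) = \mathbb{E} \int_0^{\infty} \big( X_t^{\top} Q X_t + u_t^{\top} R u_t \big)\, \mathrm{d}t,
\]

where the RL algorithm learns the minimizing feedback directly from observed trajectories instead of identifying (A, B, C, D) and solving the associated Riccati equation.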
This paper studies the problem of steering a linear time-invariant system subject to state and input constraints towards a goal location that may be inferred only through partial observations. We assume mixed-observable settings, where the system's state is fully observable and the environment's state defining the goal location is only partially observed. In these settings, the planning problem is an infinite-dimensional optimization problem where the objective is to minimize the expected cost. We show how to reformulate the control problem as a finite-dimensional deterministic problem by optimizing over a trajectory tree. Leveraging this result, we demonstrate that when the environment is static, the observation model piecewise, and the cost function convex, the original control problem can be reformulated as a Mixed-Integer Convex Program (MICP) that can be solved to global optimality using a branch-and-bound algorithm. The effectiveness of the proposed approach is demonstrated on navigation tasks, where the system has to reach a goal location identified from partial observations.
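
A hedged sketch of the trajectory-tree reformulation (notation assumed, not taken from the paper): with a finite set E of candidate environment states, prior p(e), and one input sequence per branch, the expected-cost problem becomes

\[
\min_{\{u_k^{e}\}} \; \sum_{e \in E} p(e) \sum_{k=0}^{N-1} \ell(x_k^{e}, u_k^{e}) \quad \text{s.t.} \quad x_{k+1}^{e} = A x_k^{e} + B u_k^{e}, \;\; (x_k^{e}, u_k^{e}) \in \mathcal{X} \times \mathcal{U},
\]

together with non-anticipativity constraints u_k^{e} = u_k^{e'} whenever branches e and e' are indistinguishable from the observations received up to time k; these constraints are what make the finite-dimensional deterministic problem equivalent to the original one.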