
Hamiltonian-Based Algorithm for Optimal Control

Posted by: Yorai Wardi
Publication date: 2016
Language: English





This paper proposes an algorithmic technique for a class of optimal control problems in which it is easy to compute a pointwise minimizer of the Hamiltonian associated with every applied control. The algorithm operates in the space of relaxed controls and projects the final result into the space of ordinary controls. It is based on the descent direction from a given relaxed control towards a pointwise minimizer of the Hamiltonian. This direction comprises a form of gradient projection and, for some systems, is argued to have computational advantages over direct gradient directions. The algorithm is shown to be applicable to a class of hybrid optimal control problems. The theoretical results, concerning convergence of the algorithm, are corroborated by simulation examples on switched-mode hybrid systems as well as on a problem of balancing transmission and motion energy in a mobile robotic system.
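The descent mechanism described in the abstract can be illustrated on a toy problem. The sketch below is our own illustration under stated assumptions, not the paper's implementation: for the scalar system dx/dt = u with |u| ≤ 1 and cost J = ∫ x² dt, the Hamiltonian H = p·u + x² has the easily computed pointwise minimizer u*(t) = −sign(p(t)), and each iteration moves the current control a fixed fraction of the way toward u* (a convex combination, mimicking the relaxed-control descent step). The step size, horizon, and discretization are illustrative choices.

```python
# Sketch of a Hamiltonian-pointwise-minimizer descent on a toy problem
# (illustrative assumptions, not the paper's exact algorithm):
#   system  dx/dt = u,  |u| <= 1,  x(0) = 1
#   cost    J(u) = integral of x(t)^2 over [0, T]
#   Hamiltonian H = p*u + x^2  ->  pointwise minimizer u*(t) = -sign(p(t))

T, dt = 2.0, 0.01
N = int(T / dt)

def simulate(u):
    """Forward state and cost, then backward costate (Euler discretization)."""
    x = [1.0]
    for k in range(N):
        x.append(x[k] + dt * u[k])
    J = sum(dt * xi * xi for xi in x[:-1])
    p = [0.0] * (N + 1)                      # terminal condition p(T) = 0
    for k in range(N - 1, -1, -1):
        p[k] = p[k + 1] + dt * 2.0 * x[k]    # dp/dt = -dH/dx = -2x
    return J, p

u = [0.0] * N                                # initial control guess
costs = []
for _ in range(20):
    J, p = simulate(u)
    costs.append(J)
    u_star = [-1.0 if p[k] > 0 else 1.0 for k in range(N)]  # argmin_u H
    alpha = 0.5                              # fixed step toward the minimizer
    u = [u[k] + alpha * (u_star[k] - u[k]) for k in range(N)]

print("initial cost:", costs[0], "final cost:", costs[-1])
```

Note that no derivative of the cost functional is ever formed: each iteration needs only one forward state pass, one backward costate pass, and the closed-form minimizer of H, which is the computational advantage the abstract refers to.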




Read also

Yorai Wardi, Magnus Egerstedt, 2016
This paper concerns a first-order algorithmic technique for a class of optimal control problems defined on switched-mode hybrid systems. The salient feature of the algorithm is that it avoids the computation of Fréchet or Gâteaux derivatives of the cost functional, which can be time consuming, and instead moves in a projected-gradient direction that is easily computable (for a class of problems) and does not require any explicit derivatives. The algorithm is applicable to a class of problems where a pointwise minimizer of the Hamiltonian is computable by a simple formula, and this includes many problems that arise in theory and applications. The natural setting for the algorithm is the space of continuous-time relaxed controls, whose special structure renders the analysis simpler than the setting of ordinary controls. While the space of relaxed controls has theoretical advantages, its elements are abstract entities that may not be amenable to computation. Therefore, a key feature of the algorithm is that it computes adequate approximations to relaxed controls without losing its theoretical convergence properties. Simulation results, including CPU times, support the theoretical developments.
We present a time-parallelization method that accelerates the computation of quantum optimal control algorithms. We show that this approach is approximately fully efficient when based on a gradient method as the optimization solver: the computational time is approximately divided by the number of available processors. The control of spin systems, molecular orientation, and Bose-Einstein condensates are used as illustrative examples to highlight the wide range of application of this numerical scheme.
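The parallel-in-time idea can be sketched in miniature. For a piecewise-constant control the total propagator factors as U = U_P ··· U_1, and each segment propagator depends only on its own time slice, so the slices can be evaluated concurrently before a cheap sequential composition. The sketch below is our illustration of that factorization, not the paper's code: it uses a single spin with segment Hamiltonians θ_k σ_x, for which exp(−i θ σ_x) = cos θ · I − i sin θ · σ_x in closed form.

```python
# Illustration of the time-parallel idea (not the paper's implementation):
# segment propagators exp(-i * theta_k * sigma_x) are mutually independent,
# so they can be evaluated concurrently and composed sequentially afterwards.
import cmath
from concurrent.futures import ThreadPoolExecutor

def segment_propagator(theta):
    """2x2 unitary exp(-i*theta*sigma_x) = cos(theta)*I - i*sin(theta)*sigma_x."""
    c, s = cmath.cos(theta), cmath.sin(theta)
    return [[c, -1j * s],
            [-1j * s, c]]

def matmul2(a, b):
    """Plain 2x2 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

thetas = [0.1, 0.25, 0.4, 0.05]              # control amplitude per time slice
with ThreadPoolExecutor() as pool:           # each slice on its own worker
    segments = list(pool.map(segment_propagator, thetas))

U = [[1, 0], [0, 1]]                         # identity
for seg in segments:                         # cheap sequential composition
    U = matmul2(seg, U)
```

Because all segments here share the σ_x axis they commute, so the composed U equals exp(−i Θ σ_x) with Θ = Σ θ_k, which gives a simple correctness check; in the general (non-commuting) case the composition order matters but the per-segment work is still independent.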
Qing Hui, Zhenyi Liu, 2014
In this report, we present a new Linear-Quadratic Semistabilizers (LQS) theory for linear network systems. This new semistable H2 control framework is developed to address the robust and optimal semistable control issues of network systems while preserving network topology subject to white noise. Two new notions of semistabilizability and semicontrollability are introduced as a means of connecting semistability with the Lyapunov-equation-based technique. With these new notions, we first develop a semistable H2 control theory for network systems by exploiting the properties of semistability. A new series of necessary and sufficient conditions for semistability of the closed-loop system is derived in terms of the Lyapunov equation. Based on these results, we propose a constrained optimization technique to solve the semistable H2 network-topology-preserving control design for network systems over an admissible set. Then optimization analysis and the development of numerical algorithms for the obtained constrained optimization problem are conducted. We establish the existence of optimal solutions for the obtained nonconvex optimization problem over some admissible set. Next, we propose a heuristic swarm-optimization-based numerical algorithm towards efficiently solving this nonconvex, nonlinear optimization problem. Finally, several numerical examples are provided.
This paper presents a new fast and robust algorithm that provides fuel-optimal impulsive control input sequences that drive a linear time-variant system to a desired state at a specified time. This algorithm is applicable to a broad class of problems where the cost is expressed as a time-varying norm-like function of the control input, enabling inclusion of complex operational constraints in the control planning problem. First, it is shown that the reachable sets for this problem have identical properties to those in prior works using constant cost functions, enabling use of existing algorithms in conjunction with newly derived contact and support functions. By reformulating the optimal control problem as a semi-infinite convex program, it is also demonstrated that the time-invariant component of the commonly studied primer vector is an outward normal vector to the reachable set at the target state. Using this formulation, a fast and robust algorithm that provides globally optimal impulsive control input sequences is proposed. The algorithm iteratively refines estimates of an outward normal vector to the reachable set at the target state and a minimal set of control input times until the optimality criteria are satisfied to within a user-specified tolerance. Next, optimal control inputs are computed by solving a quadratic program. The algorithm is validated through simulations of challenging example problems based on the recently proposed Miniaturized Distributed Occulter/Telescope small satellite mission, which demonstrate that the proposed algorithm converges several times faster than comparable algorithms in the literature.
In many applications, and in systems/synthetic biology in particular, it is desirable to compute control policies that force the trajectory of a bistable system from one equilibrium (the initial point) to another equilibrium (the target point), or in other words to solve the switching problem. It was recently shown that, for monotone bistable systems, this problem admits easy-to-implement open-loop solutions in terms of temporal pulses (i.e., step functions of fixed length and fixed magnitude). In this paper, we develop this idea further and formulate a problem of convergence to an equilibrium from an arbitrary initial point. We show that this problem can be solved using a static optimization problem in the case of monotone systems. Changing the initial point to an arbitrary state allows one to build closed-loop, event-based, or open-loop policies for the switching/convergence problems. In our derivations we exploit the Koopman operator, which offers a linear infinite-dimensional representation of an autonomous nonlinear system. One of the main advantages of using the Koopman operator is the powerful computational tools developed for this framework. Besides admitting numerical solutions, the switching/convergence problem can also serve as a building block for solving more complicated control problems and can potentially be applied to non-monotone systems. We illustrate this argument on the problem of synchronizing cardiac cells by defibrillation. Potentially, our approach can be extended to problems with different parametrizations of control signals, since the only fundamental limitation is the finite-time application of the control signal.
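The temporal-pulse idea is easy to see on a one-dimensional caricature. In the sketch below (our toy model with assumed parameters, not the system studied in the paper), dx/dt = −x(x − a)(x − 1) is bistable with stable equilibria at 0 and 1 and an unstable equilibrium at a; a step input of fixed magnitude and length pushes the state past a, after which the uncontrolled dynamics complete the switch on their own.

```python
# Toy bistable system switched by a temporal pulse (illustrative model):
#   dx/dt = -x*(x - a)*(x - 1) + u(t)
# stable equilibria at x = 0 and x = 1, unstable equilibrium at x = a.

A = 0.3            # unstable equilibrium (assumed value)
DT = 0.01          # Euler step

def simulate(pulse_magnitude, pulse_length, total_time=40.0, x0=0.0):
    """Forward-Euler simulation with a single rectangular pulse at t = 0."""
    x, t = x0, 0.0
    while t < total_time:
        u = pulse_magnitude if t < pulse_length else 0.0
        x += DT * (-x * (x - A) * (x - 1.0) + u)
        t += DT
    return x

x_pulsed = simulate(pulse_magnitude=0.5, pulse_length=2.0)   # switches to ~1
x_free = simulate(pulse_magnitude=0.0, pulse_length=0.0)     # stays at ~0
```

The open-loop pulse needs no state feedback: once the pulse carries the state across the unstable equilibrium, monotone convergence to the target equilibrium is guaranteed by the autonomous dynamics, which is what makes the step-function parametrization attractive.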