
Fixed-Time Extremum Seeking

Added by Jorge I. Poveda
Publication date: 2019
Research language: English





We introduce a new class of extremum seeking controllers able to achieve fixed-time convergence to the solution of optimization problems defined by static and dynamical systems. Unlike existing approaches in the literature, the convergence time of the proposed algorithms does not depend on the initial conditions and can be prescribed a priori by tuning the parameters of the controller. Specifically, our first contribution is a novel gradient-based extremum seeking algorithm for cost functions that satisfy the Polyak-Lojasiewicz (PL) inequality with some coefficient kappa > 0, for which the extremum seeking controller guarantees a fixed upper bound on the convergence time that is independent of the initial conditions but dependent on the coefficient kappa. Second, in order to remove the dependence on kappa, we introduce a novel Newton-based extremum seeking algorithm that guarantees a fully assignable fixed upper bound on the convergence time, thus paralleling existing asymptotic results in Newton-based extremum seeking where the rate of convergence is fully assignable. Finally, we study the problem of optimizing dynamical systems, where the cost function corresponds to the steady-state input-to-output map of a stable but unknown dynamical system. In this case, after a time-scale transformation is performed, the proposed extremum seeking controllers achieve the same fixed upper bound on the convergence time as in the static case. Our results exploit recent gradient flow structures proposed by Garg and Panagou in [3], and are established using averaging theory and singular perturbation theory for dynamical systems that are not necessarily Lipschitz continuous. We confirm the validity of our results via numerical simulations that illustrate the key advantages of the extremum seeking controllers presented in this paper.
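To illustrate the kind of dynamics involved, the sketch below Euler-integrates a fixed-time-style gradient flow in the spirit of the Garg and Panagou flows cited in the abstract. The specific exponents (gradient normalized by the square root of its norm, plus a term growing with the norm) and the gains c1, c2 are illustrative assumptions, not the paper's tuning; the actual extremum seeking controllers replace the exact gradient with dither-based measurements.

```python
import numpy as np

def fixed_time_gradient_flow(grad, x0, c1=1.0, c2=1.0, dt=1e-3, T=5.0):
    """Euler-integrate  x' = -c1*g/||g||^(1/2) - c2*g*||g||  (g = grad f(x)).
    The first term dominates near the minimizer, the second far from it,
    which is what yields an initial-condition-independent convergence
    time for the continuous flow.  Exponents/gains here are illustrative."""
    x = np.asarray(x0, dtype=float)
    for _ in range(int(T / dt)):
        g = grad(x)
        n = np.linalg.norm(g)
        if n < 1e-12:          # already (numerically) at a critical point
            break
        x = x - dt * (c1 * g / np.sqrt(n) + c2 * g * n)
    return x

# Quadratic cost, which satisfies the PL inequality; minimizer at (1, -2).
grad = lambda x: 2.0 * (x - np.array([1.0, -2.0]))
x_star = fixed_time_gradient_flow(grad, x0=[10.0, 10.0])
```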



Related research


In this paper, we present a novel Newton-based extremum seeking controller for the solution of multivariable model-free optimization problems in static maps. Unlike existing asymptotic and fixed-time results in the literature, we present a scheme that achieves (practical) fixed-time convergence to a neighborhood of the optimal point, with a convergence time that is independent of the initial conditions and the Hessian of the cost function, and therefore can be arbitrarily assigned a priori by the designer via an appropriate choice of parameters in the algorithm. The extremum seeking dynamics exploit a class of fixed-time convergence properties recently established in the literature for a family of Newton flows, as well as averaging results for perturbed dynamical systems that are not necessarily Lipschitz continuous. The proposed extremum seeking algorithm is model-free and does not require any explicit knowledge of the gradient and Hessian of the cost function. Instead, real-time optimization with fixed-time convergence is achieved by using real-time measurements of the cost, which is perturbed by a suitable class of periodic excitation signals generated by a dynamic oscillator. Numerical examples illustrate the performance of the algorithm.
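The model-free measurement structure described above can be sketched with the classic gradient-based extremum seeking loop (the Newton scheme in this abstract additionally estimates the inverse Hessian, which is not shown here). All gains, the dither amplitude, and the frequency below are illustrative choices; the cost is taken with zero offset for simplicity, since in practice a washout (high-pass) filter on the measurement handles nonzero offsets.

```python
import numpy as np

def extremum_seek(J, x0, k=0.5, a=0.2, omega=100.0, dt=2e-4, T=20.0):
    """Classic gradient-based extremum seeking for a scalar input:
    perturb the input with a*sin(w*t), measure the cost, demodulate
    with the same sinusoid, and descend the averaged gradient estimate.
    Model-free: only cost evaluations J(.) are used, never gradients."""
    x, t = float(x0), 0.0
    for _ in range(int(T / dt)):
        s = np.sin(omega * t)
        y = J(x + a * s)                  # measured cost at dithered input
        x -= dt * k * (2.0 / a) * y * s   # demodulated gradient estimate
        t += dt
    return x

# Unknown static map with minimizer x* = 3; the iterate converges
# (practically) to a neighborhood of 3.
J = lambda x: (x - 3.0) ** 2
x_hat = extremum_seek(J, x0=1.0)
```

Averaging the demodulated term over one dither period recovers approximately J'(x), which is why the slow dynamics behave like gradient descent on the unknown map.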
We introduce a novel class of Nash equilibrium seeking dynamics for non-cooperative games with a finite number of players, where the convergence to the Nash equilibrium is bounded by a KL function with a settling time that can be upper bounded by a positive constant that is independent of the initial conditions of the players, and which can be prescribed a priori by the system designer. The dynamics are model-free, in the sense that the mathematical forms of the cost functions of the players are unknown. Instead, in order to update its own action, each player needs to have access only to real-time evaluations of its own cost, as well as to auxiliary states of neighboring players characterized by a communication graph. Stability and convergence properties are established for both potential games and strongly monotone games. Numerical examples are presented to illustrate our theoretical results.
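For contrast with the model-free, prescribed-time dynamics above, the following toy example (our own, not from the paper) runs full-information pseudogradient play on a two-player strongly monotone quadratic game, which converges to the unique Nash equilibrium only asymptotically.

```python
import numpy as np

# Illustrative two-player quadratic game:
#   J1(x1, x2) = (x1 - 1)^2 + 0.5*x1*x2
#   J2(x1, x2) = (x2 + 1)^2 + 0.5*x1*x2
# The pseudogradient F(x) = (dJ1/dx1, dJ2/dx2) has a symmetric
# positive-definite Jacobian, so the game is strongly monotone and
# gradient play converges to the unique Nash equilibrium.
def pseudogradient(x):
    x1, x2 = x
    return np.array([2 * (x1 - 1) + 0.5 * x2,
                     2 * (x2 + 1) + 0.5 * x1])

x = np.array([5.0, -5.0])
for _ in range(2000):
    x = x - 0.1 * pseudogradient(x)  # each player descends its own cost
# The NE solves 2*x1 + 0.5*x2 = 2 and 0.5*x1 + 2*x2 = -2,
# i.e. x* = (4/3, -4/3).
```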
This paper studies the extremum seeking control (ESC) problem for a class of constrained nonlinear systems. Specifically, we focus on a family of constraints that allows the original nonlinear system to be reformulated in the so-called input-output normal form. To steer the system to optimize a performance function without knowing its explicit form, we propose a novel numerical optimization-based extremum seeking control (NOESC) design consisting of a constrained numerical optimization method and an inversion-based feedforward controller. In particular, a projected gradient descent algorithm is exploited to produce the state sequence that optimizes the performance function, whereas a suitable boundary value problem accommodates the finite-time state transition between consecutive points of the state sequence. Compared to available NOESC methods, the proposed approach i) explicitly handles output constraints; ii) allows the performance function to depend directly on the states of the internal dynamics; iii) does not require the internal dynamics to be stable. The effectiveness of the proposed ESC scheme is shown through extensive numerical simulations.
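The projected gradient descent step at the core of the NOESC design can be sketched minimally as below; the box constraint, step size, and function names are illustrative assumptions (the paper projects onto its actual feasible set, and each step is realized through a boundary value problem rather than directly).

```python
import numpy as np

def projected_gradient_descent(grad, x0, lo, hi, step=0.1, iters=500):
    """Projected gradient iteration: take a gradient step, then project
    back onto the box [lo, hi] (for a box, the Euclidean projection is
    just a componentwise clip)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = np.clip(x - step * grad(x), lo, hi)
    return x

# Performance function (x - 5)^2 with feasible set [0, 2]: the
# unconstrained minimizer 5 is infeasible, so the iterates settle
# on the boundary point x = 2.
x = projected_gradient_descent(lambda x: 2 * (x - 5.0),
                               x0=[0.0], lo=0.0, hi=2.0)
```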
In this paper we consider the problem of finding a Nash equilibrium (NE) via zeroth-order feedback information in games with merely monotone pseudogradient mapping. Based on hybrid system theory, we propose a novel extremum seeking algorithm which converges to the set of Nash equilibria in a semi-global practical sense. Finally, we present two simulation examples. The first shows that the standard extremum seeking algorithm fails, while ours succeeds in reaching an NE. In the second, we simulate an allocation problem with fixed demand.
Jaime A. Moreno, 2020
Differentiation is an important task in control, observation, and fault detection. Levant's differentiator is unique in that it can exactly and robustly estimate the derivatives of a signal with a bounded high-order derivative. However, the convergence time, although finite, grows unboundedly with the norm of the initial differentiation error, making it uncertain when the estimated derivative is exact. In this paper we propose an extension of Levant's differentiator so that the worst-case convergence time can be arbitrarily assigned independently of the initial condition, i.e., the estimation converges in fixed time. We also propose a family of continuous differentiators and provide a unified Lyapunov framework for their analysis and design.
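For reference, the baseline being extended is the first-order Levant (super-twisting) differentiator. The sketch below uses the commonly cited gains lambda1 = 1.5*sqrt(L), lambda2 = 1.1*L for a signal with |f''| <= L; the forward-Euler discretization and the time horizon are illustrative choices, and the paper's fixed-time extension modifies the correction terms, which is not shown here.

```python
import numpy as np

def levant_differentiator(f, L=1.0, dt=1e-3, T=20.0):
    """First-order super-twisting differentiator:
         z0' = z1 - lambda1*|e|^(1/2)*sign(e),   e = z0 - f(t)
         z1' = -lambda2*sign(e)
       After a finite (but initial-condition-dependent) transient,
       z1 tracks f'(t) exactly in the continuous-time ideal case."""
    lam1, lam2 = 1.5 * np.sqrt(L), 1.1 * L
    z0, z1, t = f(0.0), 0.0, 0.0
    for _ in range(int(T / dt)):
        e = z0 - f(t)
        z0 += dt * (z1 - lam1 * np.sqrt(abs(e)) * np.sign(e))
        z1 += dt * (-lam2 * np.sign(e))
        t += dt
    return z1  # estimate of f'(T)

# Differentiate sin(t), whose second derivative is bounded by L = 1;
# the estimate approaches f'(20) = cos(20) up to discretization chatter.
d_hat = levant_differentiator(np.sin)
```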
