
Stability-constrained Optimization for Nonlinear Systems based on Convex Lyapunov Functions

Published by Qifeng Li
Publication date: 2018
Language: English





This paper presents a novel, scalable framework for solving optimization problems over nonlinear systems with differential-algebraic equation (DAE) constraints that enforce asymptotic stability of the underlying dynamic model with respect to certain disturbances. Existing approaches to analogous DAE-constrained problems discretize the DAE system into a large set of nonlinear algebraic equations that represent time-marching schemes, and therefore do not scale to large models. The proposed framework, based on LaSalle's invariance principle, uses convex Lyapunov functions to develop a novel stability certificate consisting of a limited number of algebraic constraints. We develop specific algorithms for two major types of nonlinearities, namely Lur'e and quasi-polynomial systems, constructing quadratic and convex sum-of-squares Lyapunov functions for the Lur'e-type and quasi-polynomial systems, respectively. A numerical experiment on a 3-generator power network obtains a solution to the transient-stability-constrained optimal power flow problem.
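For intuition on what a quadratic Lyapunov certificate looks like as algebraic constraints, here is a minimal sketch, not the paper's actual Lur'e or sum-of-squares construction: it finds V(x) = xᵀPx for a linearized system ẋ = Ax by solving the classical Lyapunov linear matrix inequality. The system matrix and the cvxpy/SCS tooling are illustrative assumptions.

```python
# Minimal sketch (assumption: cvxpy + SCS installed; A is an illustrative
# stable matrix, not from the paper). Finds P > 0 with A^T P + P A < 0,
# so V(x) = x^T P x certifies asymptotic stability of x' = A x.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])  # illustrative system matrix
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [
    P >> eps * np.eye(n),                 # P positive definite
    A.T @ P + P @ A << -eps * np.eye(n),  # Lyapunov decrease condition
]
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve(solver=cp.SCS)

print("status:", prob.status)
print("P =\n", P.value)  # V(x) = x^T P x is the quadratic Lyapunov function
```

Once P is found offline, a stability-constrained optimization only needs a handful of algebraic constraints of the form V(x(0)) ≤ c, which is what makes the certificate-based approach scale where time-discretization does not.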




Read also

In this paper we deal with infinite-dimensional nonlinear forward-complete dynamical systems that are subject to external disturbances. We first extend the well-known Datko lemma to the considered class of systems. Thanks to this generalization, we provide characterizations of uniform (with respect to disturbances) local, semi-global, and global exponential stability through the existence of coercive and non-coercive Lyapunov functionals. The importance of the obtained results is underlined through applications concerning 1) exponential stability of nonlinear retarded systems with piecewise constant delays, 2) exponential stability preservation under sampling for semilinear control switching systems, and 3) the link between input-to-state stability and exponential stability of semilinear switching systems.
We propose a framework for using Nesterov's accelerated method on constrained convex optimization problems. Our approach first reformulates the original problem as an unconstrained optimization problem using a continuously differentiable exact penalty function. This reformulation replaces the Lagrange multipliers in the augmented Lagrangian of the original problem with Lagrange multiplier functions. The expressions of these Lagrange multiplier functions, which depend on the gradients of the objective function and the constraints, can make the unconstrained penalty function non-convex in general, even if the original problem is convex. We establish sufficient conditions on the objective function and the constraints of the original problem under which the unconstrained penalty function is convex. This enables us to use Nesterov's accelerated gradient method for unconstrained convex optimization and achieve a guaranteed rate of convergence that is better than the state-of-the-art first-order algorithms for constrained convex optimization. Simulations illustrate our results.
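As background for the method that abstract builds on, here is a minimal sketch of Nesterov's accelerated gradient iteration applied to an unconstrained convex quadratic. The quadratic data are illustrative assumptions; the paper's exact-penalty reformulation is not reproduced.

```python
# Sketch of Nesterov's accelerated gradient method for an L-smooth convex f.
# The quadratic f(x) = 0.5 x^T Q x - b^T x below is illustrative only.
import numpy as np

def nesterov(grad, x0, L, iters=200):
    """Accelerated gradient descent with step size 1/L."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L                    # gradient step at the extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t**2)) / 2    # momentum schedule
        y = x_next + ((t - 1) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
L = np.linalg.eigvalsh(Q).max()                # Lipschitz constant of the gradient
x_star = nesterov(lambda x: Q @ x - b, np.zeros(2), L)
print(x_star, np.linalg.solve(Q, b))           # the two should agree
```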
We consider an abstract class of infinite-dimensional dynamical systems with inputs. For this class, the significance of noncoercive Lyapunov functions is analyzed. It is shown that the existence of such Lyapunov functions implies norm-to-integral input-to-state stability. This property is in turn equivalent to input-to-state stability if the system satisfies certain mild regularity assumptions. For a particular class of linear systems with unbounded admissible input operators, explicit constructions of noncoercive Lyapunov functions are provided. The theory is applied to a heat equation with Dirichlet boundary conditions.
In this paper we present a new algorithmic realization of a projection-based scheme for general convex constrained optimization problems. The general idea is to transform the original optimization problem into a sequence of feasibility problems by iteratively constraining the objective function from above until the feasibility problem becomes inconsistent; each feasibility problem may be solved with any existing projection method, as sketched below. In particular, the scheme allows the use of subgradient projections and does not require exact projections onto the constraint sets, as existing similar methods do. We also apply the newly introduced concept of superiorization to the optimization formulation and compare its performance to our scheme. We provide numerical results for convex quadratic test problems as well as for real-life optimization problems coming from medical treatment planning.
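A simplified sketch of the level-set idea in that abstract: add the level constraint cᵀx ≤ τ to the constraint set, solve each feasibility problem by cyclic projections onto halfspaces, and shrink τ while feasibility holds. The toy box problem, the bisection bracket, and the stopping rules are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: minimize c^T x over halfspaces a_i^T x <= b_i by bisecting on the
# objective level tau and running cyclic projections for each feasibility test.
import numpy as np

def project_halfspace(x, a, b):
    """Euclidean projection of x onto {y : a^T y <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def feasible_point(halfspaces, x0, iters=2000, tol=1e-8):
    """Cyclic projections; returns a point and whether it meets all constraints."""
    x = x0.copy()
    for _ in range(iters):
        for a, b in halfspaces:
            x = project_halfspace(x, a, b)
    ok = all(a @ x - b <= tol for a, b in halfspaces)
    return x, ok

# Toy problem: minimize x1 + x2 over the box 0 <= x <= 1 (written as halfspaces).
c = np.array([1.0, 1.0])
box = [(np.array([1.0, 0.0]), 1.0), (np.array([-1.0, 0.0]), 0.0),
       (np.array([0.0, 1.0]), 1.0), (np.array([0.0, -1.0]), 0.0)]

lo, hi = -2.0, 2.0          # assumed bracket on the optimal value
x = np.array([0.5, 0.5])
for _ in range(30):         # bisection on the objective level tau
    tau = (lo + hi) / 2
    x_try, ok = feasible_point(box + [(c, tau)], x)
    if ok:
        hi, x = tau, x_try  # level tau is attainable: tighten from above
    else:
        lo = tau            # level tau appears infeasible: relax
print("approx optimal value:", hi, "at", x)
```

In this sketch infeasibility is inferred from the projections failing to settle within tolerance; the paper's realization handles the consistent/inconsistent decision more carefully and admits subgradient projections in place of exact ones.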
We propose a new randomized algorithm for solving convex optimization problems that have a large number of constraints (with high probability). Existing methods like interior-point or Newton-type algorithms are hard to apply to such problems because they have expensive computation and storage requirements for Hessians and matrix
