This paper presents a novel scalable framework for solving the optimization of a nonlinear system subject to differential-algebraic equation (DAE) constraints that enforce the asymptotic stability of the underlying dynamic model with respect to certain disturbances. Existing solution approaches to analogous DAE-constrained problems are based on discretizing the DAE system into a large set of nonlinear algebraic equations representing time-marching schemes; these approaches do not scale to large models. The proposed framework, based on LaSalle's invariance principle, uses convex Lyapunov functions to develop a novel stability certificate consisting of a limited number of algebraic constraints. We develop specific algorithms for two major types of nonlinearities, namely Lur'e and quasi-polynomial systems. Quadratic and convex sum-of-squares Lyapunov functions are constructed for the Lur'e-type and quasi-polynomial systems, respectively. A numerical experiment is performed on a 3-generator power network to obtain a solution for transient-stability-constrained optimal power flow.
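As a rough illustration of the kind of algebraic stability certificate described above (not the paper's actual algorithm), the sketch below searches for a quadratic Lyapunov function V(x) = x'Px for a small Lur'e-type system x' = A x - B phi(C x) with a sector-bounded nonlinearity, by solving a circle-criterion-style LMI with cvxpy. The matrices A, B, C and the sector bound k are illustrative placeholders, not data from the paper.

# Minimal sketch: quadratic Lyapunov certificate for a Lur'e-type system via an LMI.
# Assumption: x' = A x - B * phi(C x), with phi in the sector [0, k], i.e. 0 <= phi(y)*y <= k*y^2.
import numpy as np
import cvxpy as cp

# Illustrative data: a stable linear part and a scalar sector-bounded nonlinearity.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
k = 1.0  # sector bound

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)

# S-procedure form of V'(x) + 2*phi*(k*C*x - phi) expressed as a quadratic form in (x, phi);
# the multiplier is normalized to 1, which is without loss of generality since the LMI is
# homogeneous in (P, multiplier).
offdiag = -P @ B + k * C.T
M = cp.bmat([[A.T @ P + P @ A, offdiag],
             [offdiag.T,       -2.0 * np.eye(1)]])
M = 0.5 * (M + M.T)  # numerically symmetrize for the SDP solver

constraints = [P >> 1e-6 * np.eye(n), M << -1e-6 * np.eye(n + 1)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

if prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE):
    print("Quadratic Lyapunov certificate found:\nP =", P.value)
else:
    print("No certificate found with this simple LMI:", prob.status)

In the paper's setting, a certificate of this algebraic form would replace the large set of time-discretized DAE constraints inside the optimization problem; the sketch only shows the stand-alone feasibility check.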
In this paper we deal with infinite-dimensional nonlinear forward complete dynamical systems which are subject to external disturbances. We first extend the well-known Datko lemma to the framework of the considered class of systems. Thanks to this ge
We propose a framework to use Nesterov's accelerated method for constrained convex optimization problems. Our approach consists of first reformulating the original problem as an unconstrained optimization problem using a continuously differentiable ex
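The abstract above is truncated; for reference only, here is a minimal sketch of the standard Nesterov/FISTA-style accelerated gradient iteration for an unconstrained smooth convex problem, which is the building block such a penalty-reformulation approach would accelerate. The quadratic objective and all parameter values are illustrative and not taken from the paper.

# Minimal sketch: Nesterov-style accelerated gradient descent for an L-smooth convex f.
import numpy as np

def nesterov_agd(grad, x0, L, steps=500):
    """grad: gradient of f; x0: starting point; L: Lipschitz constant of grad (step 1/L)."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(steps):
        x_next = y - grad(y) / L                              # gradient step at the extrapolated point
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))     # momentum parameter update
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)      # extrapolation (momentum) step
        x, t = x_next, t_next
    return x

# Illustrative quadratic: f(x) = 0.5 * x'Qx - b'x with Q positive definite.
Q = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
x_star = nesterov_agd(lambda x: Q @ x - b, x0=np.zeros(2), L=np.linalg.eigvalsh(Q).max())
print(x_star, np.linalg.solve(Q, b))  # the two answers should roughly agree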
We consider an abstract class of infinite-dimensional dynamical systems with inputs. For this class, the significance of noncoercive Lyapunov functions is analyzed. It is shown that the existence of such Lyapunov functions implies norm-to-integral in
In this paper we present a new algorithmic realization of a projection-based scheme for general convex constrained optimization problems. The general idea is to transform the original optimization problem to a sequence of feasibility problems by itera
We propose a new randomized algorithm for solving convex optimization problems that have a large number of constraints (with high probability). Existing methods like interior-point or Newton-type algorithms are hard to apply to such problems because