
Stability Analysis of Time-varying Delay Neural Network for Convex Quadratic Programming With Equality Constraints and Inequality Constraints

Posted by Ling Zhang
Publication date: 2021
Research field:
Paper language: English





In this paper, a neural network with time-varying delays is proposed to solve convex quadratic programming problems with equality and inequality constraints. The delay term of the network varies with time t. The network uses only n + h neurons, so its structure is comparatively compact. The equilibrium point of the neural network coincides with the optimal solution of the original optimization problem, and the existence and uniqueness of this equilibrium point are proved. Global exponential stability of the network is then established by applying inequality techniques. Numerical examples are given to show that the proposed neural network model performs well in solving optimization problems.
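The abstract does not spell out the network's dynamics. Purely as an illustration of the general idea, the sketch below simulates a projection-type neural network with a bounded time-varying delay tau(t) for a small convex QP with one equality constraint and nonnegativity constraints, using a forward-Euler discretization. The problem data, the delay profile, and the specific dynamics are assumptions for the demo, not the authors' model.

```python
import numpy as np

# Illustrative QP:  minimize 0.5 x^T Q x + c^T x   s.t.  A x = b,  x >= 0.
# (Problem data made up for the demo; its KKT solution is x* = (1.25, 1.75).)
Q = np.array([[4.0, 1.0], [1.0, 2.0]])
c = np.array([-8.0, -6.0])
A = np.array([[1.0, 1.0]])
b = np.array([3.0])

alpha, dt, T = 0.2, 1e-3, 40.0            # gain, Euler step, simulation horizon
steps = int(T / dt)
tau = lambda t: 0.05 + 0.02 * np.sin(t)   # bounded time-varying delay tau(t)

n, m = Q.shape[0], A.shape[0]
max_lag = int(0.07 / dt) + 1              # history long enough for the max delay
hist_x = np.zeros((steps + max_lag + 1, n))   # zero initial history on [-tau_max, 0]
hist_y = np.zeros((steps + max_lag + 1, m))

for k in range(steps):
    i = k + max_lag                       # index of the current time t = k*dt
    lag = int(round(tau(k * dt) / dt))
    x, y = hist_x[i], hist_y[i]
    xd, yd = hist_x[i - lag], hist_y[i - lag]   # delayed states x(t - tau), y(t - tau)
    # Projection-type dynamics; np.maximum(.,0) is the projection onto x >= 0.
    proj = np.maximum(xd - alpha * (Q @ xd + c + A.T @ yd), 0.0)
    hist_x[i + 1] = x + dt * (-x + proj)
    hist_y[i + 1] = y + dt * (A @ xd - b)

print("network state x(T):", hist_x[-1])  # approaches the QP optimum (1.25, 1.75)
```

At an equilibrium, the projection equation together with A x = b reproduces the KKT conditions of the QP, which is the sense in which the equilibrium coincides with the optimal solution.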


Read also

A computationally efficient method to solve non-convex programming problems with linear equality constraints is presented. The proposed method is based on a recursively feasible and descending sequential convex programming procedure proven to converge to a locally optimal solution. Assuming that the first convex problem in the sequence is feasible, these properties are obtained by convexifying the non-convex cost and inequality constraints with inner-convex approximations. Additionally, a computationally efficient method is introduced to obtain inner-convex approximations based on Taylor series expansions. These Taylor-based inner-convex approximations provide the overall algorithm with a quadratic rate of convergence. The proposed method is capable of solving problems of practical interest in real time. This is illustrated with a numerical simulation of an aerial vehicle trajectory optimization problem on commercial off-the-shelf embedded computers.
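The Taylor-based construction itself is not given in this summary; as a minimal sketch of the inner-convex (majorization) idea on an assumed toy problem, the code below upper-bounds a concave cost term by its first-order Taylor expansion around the current iterate and solves the resulting convex, equality-constrained subproblems in sequence with SciPy's SLSQP solver.

```python
import numpy as np
from scipy.optimize import minimize

# Toy non-convex problem (made up for illustration):
#   minimize  f(x) = x1^4 + x2^2 - 3*x1^2   s.t.  x1 + x2 = 1
# The concave term -3*x1^2 is replaced by its first-order Taylor expansion at
# the current iterate, which upper-bounds it (an inner-convex / majorizing
# approximation), so every subproblem is convex and the true cost decreases.

def true_cost(x):
    return x[0]**4 + x[1]**2 - 3.0 * x[0]**2

x_k = np.array([2.0, -1.0])                       # feasible start (sums to 1)
eq_con = {"type": "eq", "fun": lambda x: x[0] + x[1] - 1.0}

for it in range(15):
    xk0 = x_k[0]
    # Convex majorizer: -3*x1^2 <= -3*xk0^2 - 6*xk0*(x1 - xk0)
    surrogate = lambda x: x[0]**4 + x[1]**2 - 3.0*xk0**2 - 6.0*xk0*(x[0] - xk0)
    res = minimize(surrogate, x_k, constraints=[eq_con], method="SLSQP")
    x_k = res.x
    print(f"iter {it:2d}  x = {x_k}  f(x) = {true_cost(x_k):.6f}")
```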
Chuanye Gu, Zhiyou Wu, Jueyou Li (2018)
This paper considers a distributed convex optimization problem over a time-varying multi-agent network, where each agent has its own decision variables that should be set so as to minimize its individual objective subject to local constraints and global coupling equality constraints. Over directed graphs, a distributed algorithm is proposed that incorporates the push-sum protocol into dual subgradient methods. Under the convexity assumption, optimality of the primal and dual variables and convergence of the constraint violations are first established. Then explicit convergence rates of the proposed algorithm are obtained. Finally, numerical experiments on the economic dispatch problem are provided to demonstrate the efficacy of the proposed algorithm.
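The full dual subgradient scheme is not reproduced here; the sketch below shows only the push-sum building block the algorithm relies on, which lets agents connected by a directed, column-stochastic weight matrix agree on the average of their local values (hypothetical local dual estimates in this demo).

```python
import numpy as np

# Push-sum (ratio) consensus over a directed graph.  The column-stochastic
# matrix P encodes who sends to whom; graph and initial values are made up.
P = np.array([[0.5, 0.0, 0.3],
              [0.5, 0.6, 0.0],
              [0.0, 0.4, 0.7]])          # each column sums to 1 (out-weights)
assert np.allclose(P.sum(axis=0), 1.0)

z = np.array([4.0, -1.0, 6.0])           # hypothetical local dual estimates
w = np.ones(3)                           # push-sum weights, initialized to 1

for k in range(50):
    z = P @ z                            # push value mass along directed edges
    w = P @ w                            # push weight mass the same way
    ratio = z / w                        # each agent's running average estimate

print("push-sum estimates:", ratio)      # all approach mean([4, -1, 6]) = 3
print("true average      :", np.mean([4.0, -1.0, 6.0]))
```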
In this paper we provide new conditions ensuring the isolated calmness property and the Aubin property of parameterized variational systems with constraints that depend, apart from the parameter, also on the solution itself. Such systems include, e.g., quasi-variational inequalities and implicit complementarity problems. Concerning the Aubin property, possible restrictions imposed on the parameter are also admitted. Throughout the paper, tools from the directional limiting generalized differential calculus are employed, enabling us to impose only rather weak (non-restrictive) qualification conditions. Despite the very general problem setting, the resulting conditions are workable, as documented by some academic examples.
We propose a method to impose homogeneous linear inequality constraints of the form $Ax \leq 0$ on neural network activations. The proposed method allows a data-driven training approach to be combined with modeling of prior knowledge about the task. One way to achieve this is a projection step at test time after unconstrained training; however, this is an expensive operation. By directly incorporating the constraints into the architecture, we can significantly speed up inference at test time; for instance, our experiments show a speed-up of up to two orders of magnitude over a projection method. Our algorithm computes a suitable parameterization of the feasible set at initialization and uses standard variants of stochastic gradient descent to find solutions to the constrained network. Thus, the modeling constraints are always satisfied during training. Crucially, our approach avoids solving an optimization problem at each training step or manually trading off data and constraint fidelity with additional hyperparameters. We consider constrained generative modeling as an important application domain and experimentally demonstrate the proposed method by constraining a variational autoencoder.
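As a toy illustration of building $Ax \leq 0$ into the architecture instead of projecting afterwards, the sketch below parameterizes the feasible cone of a small hand-picked constraint matrix by its generating rays, so any nonnegative combination of activations satisfies the constraint by construction; the cone, rays, and layer are assumptions, not the paper's parameterization.

```python
import numpy as np

# The constraint A x <= 0 defines a polyhedral cone.  For this hypothetical
# 2-D example the cone {x : x2 >= |x1|} is generated by the rays (1,1) and
# (-1,1), so any nonnegative combination of them satisfies A x <= 0 exactly.
A = np.array([[ 1.0, -1.0],
              [-1.0, -1.0]])
R = np.array([[ 1.0, -1.0],      # columns are the generating rays
              [ 1.0,  1.0]])

def softplus(z):
    return np.log1p(np.exp(z))   # smooth map onto nonnegative coefficients

def constrained_layer(h, W, b):
    """Map unconstrained features h to an output that satisfies A x <= 0."""
    coeffs = softplus(W @ h + b)  # nonnegative combination weights
    return R @ coeffs             # output lies in the cone by construction

rng = np.random.default_rng(0)
W, b = rng.normal(size=(2, 4)), rng.normal(size=2)
h = rng.normal(size=4)            # some upstream activation
x = constrained_layer(h, W, b)
print("output x:", x, " A @ x:", A @ x)   # A @ x <= 0 for any h, W, b
assert np.all(A @ x <= 1e-9)
```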
Abstracting neural networks by the constraints they impose on their inputs and outputs can be very useful in the analysis of neural network classifiers and for deriving optimization-based algorithms for certifying the stability and robustness of feedback systems involving neural networks. In this paper, we propose a convex program, in the form of a Linear Matrix Inequality (LMI), to certify incremental quadratic constraints on the map of neural networks over a region of interest. These certificates can capture several useful properties such as (local) Lipschitz continuity, one-sided Lipschitz continuity, invertibility, and contraction. We illustrate the utility of our approach in two different settings. First, we develop a semidefinite program to compute guaranteed and sharp upper bounds on the local Lipschitz constant of neural networks and illustrate the results on random networks as well as networks trained on MNIST. Second, we consider a linear time-invariant system in feedback with an approximate model predictive controller parameterized by a neural network. We then turn the stability analysis into a semidefinite feasibility program and estimate an ellipsoidal invariant set for the closed-loop system.
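The paper's incremental-quadratic-constraint LMI is not reproduced in this summary; as a related illustration of certifying a Lipschitz bound with a semidefinite program, the sketch below implements a LipSDP-style condition for a random one-hidden-layer ReLU network in cvxpy. The matrix form follows the single-layer LipSDP certificate for activations slope-restricted in [0, 1] and is an assumption here, not this paper's exact certificate.

```python
import numpy as np
import cvxpy as cp

# One-hidden-layer ReLU network f(x) = W1 @ relu(W0 @ x); random weights for the demo.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 8, 2
W0 = rng.normal(size=(n_hid, n_in)) / np.sqrt(n_in)
W1 = rng.normal(size=(n_out, n_hid)) / np.sqrt(n_hid)

# LipSDP-style certificate: ReLU is slope-restricted in [0, 1], so with a
# diagonal multiplier T >= 0 and scalar rho >= 0, the LMI below certifies
# that sqrt(rho) upper-bounds the global Lipschitz constant of f.
t = cp.Variable(n_hid, nonneg=True)
rho = cp.Variable(nonneg=True)
T = cp.diag(t)
M = cp.bmat([[-rho * np.eye(n_in),  W0.T @ T],
             [T @ W0,               -2 * T + W1.T @ W1]])
prob = cp.Problem(cp.Minimize(rho), [0.5 * (M + M.T) << 0])
prob.solve(solver=cp.SCS)

print("certified Lipschitz upper bound :", np.sqrt(rho.value))
print("naive product of spectral norms :",
      np.linalg.norm(W0, 2) * np.linalg.norm(W1, 2))
```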