
Exponential stability for time-delay neural networks via new weighted integral inequalities

Added by Chenyang Shi
Publication date: 2020
Language: English





We study exponential stability for a class of neural networks with time-varying delay. By extending the auxiliary-function-based integral inequality, a novel integral inequality is derived using weighted orthogonal functions, one of which is discontinuous. The new inequality is then applied to investigate the exponential stability of time-delay neural networks via the Lyapunov-Krasovskii functional (LKF) method. Numerical examples are given to verify the advantages of the proposed criterion.
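For orientation only, here is a generic sketch of how an exponentially weighted LKF yields an exponential decay estimate for a system with delay 0 <= h(t) <= h; this is the textbook template, not the paper's specific functional or weighted inequality:

```latex
\[
V(x_t) = x^{\top}(t) P x(t)
  + \int_{t-h(t)}^{t} e^{2\alpha(s-t)}\, x^{\top}(s) Q x(s)\, \mathrm{d}s
  + h \int_{-h}^{0}\!\int_{t+\theta}^{t} e^{2\alpha(s-t)}\, \dot{x}^{\top}(s) R\, \dot{x}(s)\, \mathrm{d}s\, \mathrm{d}\theta
\]
% with P, Q, R positive definite. If along trajectories
\[
\dot{V}(x_t) + 2\alpha V(x_t) \le 0,
\]
% then V(x_t) <= e^{-2\alpha t} V(x_0), which gives the exponential estimate
\[
\lVert x(t) \rVert \le c\, e^{-\alpha t} \sup_{-h \le s \le 0} \lVert x(s) \rVert .
\]
% Integral inequalities (Jensen, Wirtinger-based, auxiliary-function-based,
% and their weighted refinements) are what make the bound on \dot{V} tractable.
```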



Related research

Many large-scale and distributed optimization problems can be brought into a composite form in which the objective function is given by the sum of a smooth term and a nonsmooth regularizer. Such problems can be solved via a proximal gradient method and its variants, thereby generalizing gradient descent to a nonsmooth setup. In this paper, we view proximal algorithms as dynamical systems and leverage techniques from control theory to study their global properties. In particular, for problems with strongly convex objective functions, we utilize the theory of integral quadratic constraints to prove the global exponential stability of the equilibrium points of the differential equations that govern the evolution of proximal gradient and Douglas-Rachford splitting flows. In our analysis, we use the fact that these algorithms can be interpreted as variable-metric gradient methods on suitable envelopes and exploit structural properties of the nonlinear terms that arise from the gradient of the smooth part of the objective function and the proximal operator associated with the nonsmooth regularizer. We also demonstrate that these envelopes can be obtained from the augmented Lagrangian associated with the original nonsmooth problem and establish conditions for global exponential convergence even in the absence of strong convexity.
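As a concrete, minimal illustration of the composite setup described above (a plain proximal gradient iteration for f(x) + g(x) with g the l1 regularizer, not the paper's IQC analysis; function and variable names are ours):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    When the smooth part is strongly convex (A has full column rank) and the
    step size is suitable, the iterates converge linearly -- the discrete
    analogue of the exponential stability discussed above.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                          # gradient of smooth term
        x = soft_threshold(x - step * grad, step * lam)   # proximal step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 20))
b = rng.standard_normal(60)
step = 1.0 / np.linalg.norm(A, 2) ** 2                    # 1/L, L = ||A||_2^2
x_hat = proximal_gradient(A, b, lam=0.1, step=step)
```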
Oran Gannot (2019)
We discuss some frequency-domain criteria for the exponential stability of nonlinear feedback systems based on dissipativity theory. Applications are given to convergence rates for certain perturbations of the damped harmonic oscillator.
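For reference, the damped harmonic oscillator mentioned above is the textbook example of exponential stability:

```latex
\[
\ddot{x} + 2\zeta\omega\,\dot{x} + \omega^{2} x = 0, \qquad \zeta \in (0, 1),\ \omega > 0.
\]
% The eigenvalues are -\zeta\omega \pm i\omega\sqrt{1-\zeta^{2}}, so solutions
% decay at rate \zeta\omega:
\[
\lVert (x(t), \dot{x}(t)) \rVert \le c\, e^{-\zeta\omega t}\, \lVert (x(0), \dot{x}(0)) \rVert .
\]
```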
Ling Zhang, Xiaoqi Sun (2021)
In this paper, a kind of neural network with time-varying delays is proposed to solve quadratic programming problems. The delay term of the neural network changes with time t. The network uses only n + h neurons, so its structure is more concise. The equilibrium point of the neural network coincides with the optimal solution of the original optimization problem. The existence and uniqueness of the equilibrium point are proved, and global exponential stability of the network is established by inequality techniques. Some numerical examples are given to show that the proposed neural network model performs well in solving optimization problems.
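As a rough sketch of this family of models, here is a generic projection neural network for a box-constrained QP, without the paper's delay term or its n + h structure; all names are illustrative:

```python
import numpy as np

def project_box(v, lo, hi):
    """Projection onto the box [lo, hi]^n."""
    return np.clip(v, lo, hi)

def projection_network(Q, c, lo, hi, alpha=0.1, dt=0.01, steps=20000):
    """Generic projection neural network for the box-constrained QP
        min 0.5 * x^T Q x + c^T x   s.t.   lo <= x <= hi,
    integrated with forward Euler:
        dx/dt = -x + P_box(x - alpha * (Q x + c)).
    Its equilibrium coincides with the QP's optimal solution.
    """
    x = np.zeros(len(c))
    for _ in range(steps):
        x = x + dt * (-x + project_box(x - alpha * (Q @ x + c), lo, hi))
    return x

Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite
c = np.array([-1.0, -2.0])
x_star = projection_network(Q, c, lo=-1.0, hi=1.0)
```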
In this paper, we propose two new solution schemes to solve stochastic strongly monotone variational inequality (VI) problems: the stochastic extra-point solution scheme and the stochastic extra-momentum solution scheme. The first one is a general scheme based on updating the iterative sequence and an auxiliary extra-point sequence; in the case of the deterministic VI model, this approach includes several state-of-the-art first-order methods as special cases. The second scheme combines two momentum-based directions: the so-called heavy-ball direction and the optimism direction, where only one projection per iteration is required in its updating process. We show that, if the variance of the stochastic oracle is appropriately controlled, then both schemes can be made to achieve the optimal iteration complexity of $\mathcal{O}\left(\kappa\ln\left(\frac{1}{\epsilon}\right)\right)$ to reach an $\epsilon$-solution for a strongly monotone VI problem with condition number $\kappa$. We show that these methods can be readily incorporated in a zeroth-order approach to solve stochastic minimax saddle-point problems, where only noisy and biased samples of the objective can be obtained, with a total sample complexity of $\mathcal{O}\left(\frac{\kappa^2}{\epsilon}\ln\left(\frac{1}{\epsilon}\right)\right)$.
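For context, here is a minimal deterministic extragradient iteration, one of the classic first-order VI methods that such extra-point schemes generalize (the paper's stochastic schemes themselves are not reproduced here):

```python
import numpy as np

def extragradient(F, project, x0, eta=0.1, iters=1000):
    """Classic (deterministic) extragradient method for a monotone VI:
    find x* in X with <F(x*), x - x*> >= 0 for all x in X.
    """
    x = x0
    for _ in range(iters):
        y = project(x - eta * F(x))        # extrapolation (the "extra point")
        x = project(x - eta * F(y))        # update using the extra point
    return x

# Example: strongly monotone affine operator F(x) = M x + q on a box.
M = np.array([[3.0, 1.0], [-1.0, 2.0]])   # M + M^T is positive definite
q = np.array([1.0, -1.0])
F = lambda x: M @ x + q
project = lambda v: np.clip(v, -2.0, 2.0)
x_star = extragradient(F, project, x0=np.zeros(2))
```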
Control of complex systems involves both system identification and controller design. Deep neural networks have proven to be successful in many identification tasks; however, from a model-based control perspective, these networks are difficult to work with because they are typically nonlinear and nonconvex. Therefore many systems are still identified and controlled based on simple linear models despite their poor representation capability. In this paper we bridge the gap between model accuracy and control tractability faced by neural networks, by explicitly constructing networks that are convex with respect to their inputs. We show that these input convex networks can be trained to obtain accurate models of complex physical systems. In particular, we design input convex recurrent neural networks to capture the temporal behavior of dynamical systems. Optimal controllers can then be obtained by solving a convex model predictive control problem. Experimental results demonstrate the good potential of the proposed input convex neural network based approach in a variety of control applications. In particular, we show that in the MuJoCo locomotion tasks we could achieve over 10% higher performance using 5x less time compared with a state-of-the-art model-based reinforcement learning method, and in the building HVAC control example our method achieved up to 20% energy reduction compared with classic linear models.
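A minimal sketch of the input-convexity construction in the spirit of input convex neural networks: non-negative weights on the hidden path, combined with convex non-decreasing activations, keep the output convex in the input. Names and layer sizes here are illustrative, not the paper's architecture:

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

class InputConvexNet:
    """Fully input convex feedforward sketch:
        z_{k+1} = relu(Wz_k @ z_k + Wx_k @ x + b_k),
    with Wz_k constrained non-negative so the output is convex in x.
    """
    def __init__(self, sizes, rng):
        self.Wz = [np.abs(rng.standard_normal((m, n)))      # non-negative z-path
                   for n, m in zip(sizes[1:-1], sizes[2:])]
        self.Wx = [rng.standard_normal((m, sizes[0]))       # unconstrained x-path
                   for m in sizes[1:]]
        self.b = [np.zeros(m) for m in sizes[1:]]

    def __call__(self, x):
        z = relu(self.Wx[0] @ x + self.b[0])                # first hidden layer
        for Wz, Wx, b in zip(self.Wz, self.Wx[1:], self.b[1:]):
            z = relu(Wz @ z + Wx @ x + b)                   # convexity preserved
        return z

rng = np.random.default_rng(0)
net = InputConvexNet(sizes=[4, 16, 16, 1], rng=rng)
y = net(rng.standard_normal(4))
```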
