
$\mathcal{H}_\infty$ Network Optimization for Edge Consensus

Publication date: 2021
Language: English





This paper examines the $\mathcal{H}_\infty$ performance problem of the edge agreement protocol for networks of agents operating on independent time scales, connected by weighted edges, and corrupted by exogenous disturbances. $\mathcal{H}_\infty$-norm expressions and bounds are computed and then used to derive new insights into network performance, specifically the effect of time scales and edge weights on disturbance rejection. We use our bounds to formulate a convex optimization problem for time-scale and edge-weight selection. Numerical examples illustrate the applicability of the derived $\mathcal{H}_\infty$-norm bound expressions, and the optimization paradigm is demonstrated via a formation control example involving non-homogeneous agents.
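The protocol described above can be illustrated with a minimal numerical sketch. The graph, edge weights, and time scales below are illustrative assumptions, not values from the paper: node states evolve as $D_\epsilon \dot{x} = -E W E^\top x$, where $E$ is the incidence matrix, $W$ the diagonal matrix of edge weights, and $D_\epsilon$ the diagonal matrix of agent time scales; the edge states $x_e = E^\top x$ should converge to zero at agreement.

```python
import numpy as np

# Hypothetical sketch of the weighted edge-agreement dynamics with
# heterogeneous agent time scales (disturbance-free case):
#   D_eps * xdot = -E W E^T x
# Graph, weights, and time scales are illustrative choices.

E = np.array([[ 1,  0,  0],
              [-1,  1,  0],
              [ 0, -1,  1],
              [ 0,  0, -1]], dtype=float)   # incidence matrix, path graph on 4 nodes
w = np.array([1.0, 2.0, 0.5])               # edge weights (assumed)
eps = np.array([1.0, 0.5, 2.0, 1.0])        # agent time scales (assumed)

L_w = E @ np.diag(w) @ E.T                  # weighted graph Laplacian
A = -np.diag(1.0 / eps) @ L_w               # state matrix of the protocol

x = np.array([1.0, -2.0, 0.5, 3.0])         # initial node states
dt = 0.01
for _ in range(20000):                      # forward-Euler integration to t = 200
    x = x + dt * (A @ x)

edge_disagreement = np.linalg.norm(E.T @ x) # edge states x_e = E^T x
print(edge_disagreement)                    # near zero: agents have agreed
```

With heterogeneous time scales the agents still reach agreement; only the consensus value (a time-scale-weighted average of the initial states) is affected.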



Related research

While many distributed optimization algorithms have been proposed for solving smooth or convex problems over networks, few can handle non-convex and non-smooth problems. Based on a proximal primal-dual approach, this paper presents a new (stochastic) distributed algorithm with Nesterov momentum for accelerated optimization of non-convex and non-smooth problems. Theoretically, we show that the proposed algorithm can achieve an $\epsilon$-stationary solution under a constant step size with $\mathcal{O}(1/\epsilon^2)$ computation complexity and $\mathcal{O}(1/\epsilon)$ communication complexity. Compared to existing gradient-tracking-based methods, the proposed algorithm has the same order of computation complexity but a lower order of communication complexity. To the best of our knowledge, this is the first stochastic algorithm with $\mathcal{O}(1/\epsilon)$ communication complexity for non-convex and non-smooth problems. Numerical experiments on a distributed non-convex regression problem and a deep-neural-network-based classification problem illustrate the effectiveness of the proposed algorithms.
This technical note proposes decentralized partial-consensus optimization with inequality constraints, and a continuous-time algorithm based on multiple interconnected recurrent neural networks (RNNs) is derived to solve the resulting optimization problems. First, a partial-consensus matrix originating from the Laplacian matrix is constructed to handle the partial-consensus constraints. In addition, using non-smooth analysis and a Lyapunov-based technique, the convergence of the designed algorithm is guaranteed. Finally, the effectiveness of the obtained results is demonstrated through several examples.
In this work, we introduce ADAPD, $\textbf{A}$ $\textbf{D}$ecentr$\textbf{A}$lized $\textbf{P}$rimal-$\textbf{D}$ual algorithmic framework for solving non-convex and smooth consensus optimization problems over a network of distributed agents. ADAPD makes use of an inexact ADMM-type update. During each iteration, each agent first inexactly solves a local strongly convex subproblem and then performs a neighbor communication while updating a set of dual variables. Two variations of ADAPD are presented. The variants allow agents to balance the communication and computation workload while they collaboratively solve the consensus optimization problem. The optimal convergence rate for non-convex and smooth consensus optimization problems is established; namely, ADAPD achieves $\varepsilon$-stationarity in $\mathcal{O}(\varepsilon^{-1})$ iterations. Numerical experiments demonstrate the superiority of ADAPD over several existing decentralized methods.
This paper deals with the distributed $\mathcal{H}_2$ optimal control problem for linear multi-agent systems. In particular, we consider a suboptimal version of the distributed $\mathcal{H}_2$ optimal control problem. Given a linear multi-agent system with identical agent dynamics and an associated $\mathcal{H}_2$ cost functional, our aim is to design a distributed diffusive static protocol such that the protocol achieves state synchronization for the controlled network and such that the associated cost is smaller than an a priori given upper bound. We first analyze the $\mathcal{H}_2$ performance of linear systems and then apply the results to linear multi-agent systems. Two design methods are provided to compute such a suboptimal distributed protocol. For each method, the expression for the local control gain involves a solution of a single Riccati inequality, of dimension equal to that of the individual agent dynamics, together with the smallest nonzero and largest eigenvalues of the graph Laplacian.
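Both design methods above depend on two spectral quantities of the network: the smallest nonzero and the largest eigenvalues of the graph Laplacian. A short sketch of how these are obtained, for an illustrative graph of our own choosing (not one from the paper):

```python
import numpy as np

# Compute the Laplacian eigenvalues used by the suboptimal H2 protocol.
# The adjacency matrix below is an assumed 4-node example: a triangle
# on nodes {0,1,2} with node 3 attached to node 2.

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # symmetric adjacency matrix
L = np.diag(A.sum(axis=1)) - A              # graph Laplacian L = D - A
eigs = np.sort(np.linalg.eigvalsh(L))       # eigenvalues in ascending order

lambda_2 = eigs[1]                          # smallest nonzero (graph is connected)
lambda_n = eigs[-1]                         # largest eigenvalue
print(lambda_2, lambda_n)
```

For a connected graph the smallest eigenvalue is exactly zero, so `eigs[1]` is the smallest nonzero one; these two scalars then enter the local Riccati-based gain design.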
This article considers $\mathcal{H}_\infty$ static output-feedback control for linear time-invariant uncertain systems with polynomial dependence on probabilistic time-invariant parametric uncertainties. By applying polynomial chaos theory, the control synthesis problem is solved using a high-dimensional expanded system which characterizes stochastic state uncertainty propagation. A closed-loop polynomial chaos transformation is proposed to derive the closed-loop expanded system. The approach explicitly accounts for the closed-loop dynamics and preserves the $\mathcal{L}_2$-induced gain, which results in smaller transformation errors compared to existing polynomial chaos transformations. The effect of using finite-degree polynomial chaos expansions is first captured by a norm-bounded linear differential inclusion, and then addressed by formulating a robust polynomial-chaos-based control synthesis problem. This approach avoids the use of high-degree polynomial chaos expansions to alleviate the destabilizing effect of truncation errors, which significantly reduces computational complexity. In addition, some analysis is given for the condition under which the robustly stabilized expanded system implies robust stability of the original system. A numerical example illustrates the effectiveness of the proposed approach.