
Distributed Averaging Problems over Directed Signed Networks

Posted by Mingjun Du
Publication date: 2019
Paper language: English

This paper addresses distributed averaging problems for signed networks with general directed topologies represented by signed digraphs. A new class of improved Laplacian potential functions is proposed by introducing two notions associated with any signed digraph: its induced unsigned digraph and its mirror (undirected) signed graph. Based on these, two distributed averaging protocols are designed using nearest-neighbor rules. It is shown that, with either of the designed protocols, signed-average consensus (respectively, state stability) is achieved if and only if the associated signed digraph of the signed network is structurally balanced (respectively, unbalanced), regardless of whether weight balance is satisfied. Furthermore, the improved Laplacian potential functions can be exploited to solve fixed-time consensus problems of signed networks with directed topologies, for which a nonlinear distributed protocol is proposed to ensure bipartite consensus or state stability within a fixed time. Additionally, the convergence analyses of directed signed networks can be carried out with the Lyapunov stability method by revealing the tight relationship between the convergence behaviors of directed signed networks and the properties of the improved Laplacian potential functions. Illustrative examples demonstrate the validity of the theoretical results for directed signed networks.
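For a concrete feel for nearest-neighbor protocols over signed digraphs, the following is a minimal sketch of a standard Altafini-type signed-consensus update, not the paper's improved-Laplacian protocols. The adjacency matrix, step size, and initial states are invented for illustration; the example digraph is strongly connected and structurally balanced.

```python
import numpy as np

# Invented, structurally balanced signed digraph: A[i, j] is the weight agent i
# places on agent j; negative entries model antagonistic interactions.
A = np.array([
    [ 0.0,  1.0, -0.5,  0.0],
    [ 0.0,  0.0, -1.0,  0.0],
    [ 0.0,  0.0,  0.0,  1.0],
    [-1.0,  0.0,  0.0,  0.0],
])

def signed_consensus_step(x, A, dt=0.05):
    """One Euler step of the standard Altafini-type nearest-neighbor protocol:
    dx_i/dt = -sum_j |a_ij| * (x_i - sign(a_ij) * x_j)."""
    x_next = x.copy()
    n = len(x)
    for i in range(n):
        for j in range(n):
            a = A[i, j]
            if a != 0.0:
                x_next[i] -= dt * abs(a) * (x[i] - np.sign(a) * x[j])
    return x_next

x = np.array([1.0, -2.0, 3.0, 0.5])
for _ in range(1000):
    x = signed_consensus_step(x, A)

# With structural balance (partition {0, 1} vs {2, 3}), the states reach bipartite
# consensus: the two groups agree on values of equal magnitude and opposite sign.
print(np.round(x, 4))
```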


Read also

Distributed processing over networks relies on in-network processing and cooperation among neighboring agents. Cooperation is beneficial when agents share a common objective. However, in many applications agents may belong to different clusters that pursue different objectives. Then, indiscriminate cooperation will lead to undesired results. In this work, we propose an adaptive clustering and learning scheme that allows agents to learn which neighbors they should cooperate with and which other neighbors they should ignore. In doing so, the resulting algorithm enables the agents to identify their clusters and to attain improved learning and estimation accuracy over networks. We carry out a detailed mean-square analysis and assess the error probabilities of Types I and II, i.e., false alarm and mis-detection, for the clustering mechanism. Among other results, we establish that these probabilities decay exponentially with the step-sizes so that the probability of correct clustering can be made arbitrarily close to one.
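As a rough illustration only, not the paper's scheme, the sketch below runs an adapt-then-combine diffusion LMS in which each agent also keeps a purely local estimate and cooperates only with neighbors whose local estimates look consistent with its own. The network, models, threshold, and step size are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (all values invented): two clusters of agents estimate
# different 2-d models over a path graph; agents must learn whom to trust.
N, d, mu = 6, 2, 0.05
true_w = np.array([[1.0, 1.0]] * 3 + [[-1.0, 2.0]] * 3)
neighbors = {i: [j for j in range(N) if abs(i - j) <= 1] for i in range(N)}
w = np.zeros((N, d))        # cooperative estimates
w_loc = np.zeros((N, d))    # non-cooperative estimates used only for the trust test

for _ in range(3000):
    psi = np.empty_like(w)
    for i in range(N):
        u = rng.standard_normal(d)
        y = u @ true_w[i] + 0.1 * rng.standard_normal()
        psi[i] = w[i] + mu * u * (y - u @ w[i])    # adapt: local LMS step
        w_loc[i] += mu * u * (y - u @ w_loc[i])    # purely local reference estimate
    for i in range(N):
        # Combine only with neighbors whose local estimates look consistent with ours;
        # this crude threshold stands in for the paper's statistical clustering test.
        trusted = [j for j in neighbors[i] if np.linalg.norm(w_loc[j] - w_loc[i]) < 1.0]
        w[i] = psi[trusted].mean(axis=0)

print(np.round(w, 2))  # each agent should recover its own cluster's model
```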
Chuanye Gu, Zhiyou Wu, Jueyou Li (2018)
This paper considers a distributed convex optimization problem over a time-varying multi-agent network, where each agent has its own decision variables that should be set so as to minimize its individual objective subject to local constraints and global coupling equality constraints. Over directed graphs, a distributed algorithm is proposed that incorporates the push-sum protocol into dual subgradient methods. Under the convexity assumption, results on the optimality of the primal and dual variables and on the constraint violations are first established. Then the explicit convergence rates of the proposed algorithm are obtained. Finally, some numerical experiments on the economic dispatch problem are provided to demonstrate the efficacy of the proposed algorithm.
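The push-sum ingredient can be illustrated in isolation. Below is a minimal sketch of push-sum average consensus on an invented directed graph; the dual-subgradient part and the coupling constraints are omitted. Each agent's ratio x_i / y_i converges to the network-wide average of the initial values.

```python
import numpy as np

# Invented directed graph: out_neighbors[i] lists who agent i sends to (excluding itself).
out_neighbors = {0: [1], 1: [2], 2: [3, 0], 3: [0]}
N = 4
x = np.array([4.0, 0.0, 2.0, 6.0])   # initial values; true average is 3.0
y = np.ones(N)                        # push-sum weights

for _ in range(100):
    x_new, y_new = np.zeros(N), np.zeros(N)
    for i in range(N):
        share = 1.0 / (len(out_neighbors[i]) + 1)   # split equally among self + out-neighbors
        for j in out_neighbors[i] + [i]:
            x_new[j] += share * x[i]
            y_new[j] += share * y[i]
    x, y = x_new, y_new

print(x / y)  # each ratio converges to the average of the initial values (3.0 here)
```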
In this paper, we consider the binary classification problem via distributed Support Vector Machines (SVM), where the idea is to train a network of agents, each with a limited share of the data, to cooperatively learn the SVM classifier for the global database. Agents only share processed information regarding the classifier parameters and the gradient of the local loss functions instead of their raw data. In contrast to the existing work, we propose a continuous-time algorithm that incorporates network topology changes in discrete jumps. This hybrid nature allows us to remove the chattering that arises from discretizing the underlying continuous-time process. We show that the proposed algorithm converges to the SVM classifier over time-varying weight-balanced directed graphs by using arguments from matrix perturbation theory.
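As a much-simplified, discrete-time stand-in for the idea, not the paper's hybrid continuous-time algorithm, the sketch below combines consensus averaging of the classifier parameters with local subgradient steps on a regularized hinge loss. The data, mixing matrix, and step sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented example: 3 agents, each holding a private slice of a linearly separable 2-d dataset.
N, d, lam, step = 3, 2, 0.1, 0.05
w_true = np.array([1.0, -1.0])
local_X = [rng.standard_normal((20, d)) for _ in range(N)]
local_y = [np.sign(X @ w_true) for X in local_X]
W = np.zeros((N, d))               # each row is one agent's classifier estimate
mix = np.full((N, N), 1.0 / N)     # doubly stochastic mixing matrix (complete graph)

for _ in range(500):
    W = mix @ W                    # consensus step: average with neighbors
    for i in range(N):             # local subgradient step on the regularized hinge loss
        margins = local_y[i] * (local_X[i] @ W[i])
        grad = lam * W[i] - (local_X[i] * local_y[i][:, None])[margins < 1].sum(axis=0) / len(local_y[i])
        W[i] -= step * grad

print(np.round(W, 2))  # agents agree on (roughly) the same classifier direction
```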
Zhuoqing Song, Lei Shi, Shi Pu (2021)
In this paper, we propose two communication-efficient algorithms for decentralized optimization over a multi-agent network with general directed network topology. In the first part, we consider a novel communication-efficient gradient tracking based method, termed Compressed Push-Pull (CPP), which combines the Push-Pull method with communication compression. We show that CPP is applicable to a general class of unbiased compression operators and achieves linear convergence for strongly convex and smooth objective functions. In the second part, we propose a broadcast-like version of CPP (B-CPP), which also achieves linear convergence rate under the same conditions for the objective functions. B-CPP can be applied in an asynchronous broadcast setting and further reduce communication costs compared to CPP. Numerical experiments complement the theoretical analysis and confirm the effectiveness of the proposed methods.
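A minimal sketch of the underlying push-pull gradient-tracking iteration, without compression and hence not CPP itself, may help fix ideas. The directed graph, the row-stochastic matrix R, the column-stochastic matrix C, and the quadratic objectives are invented for illustration.

```python
import numpy as np

# Invented directed graph on 3 agents with edges 0->1, 0->2, 1->2, 2->0 (strongly connected).
# R is row-stochastic (each agent averages what it pulls from in-neighbors);
# C is column-stochastic (each agent splits what it pushes to out-neighbors).
R = np.array([[1/2, 0,   1/2],
              [1/2, 1/2, 0  ],
              [1/3, 1/3, 1/3]])
C = np.array([[1/3, 0,   1/2],
              [1/3, 1/2, 0  ],
              [1/3, 1/2, 1/2]])

b = np.array([1.0, 3.0, 8.0])          # local quadratics f_i(x) = 0.5 * (x - b_i)^2
grad = lambda x: x - b                  # stacked local gradients
alpha = 0.05

x = np.zeros(3)                         # local decision variables
y = grad(x)                             # gradient trackers, initialized at the local gradients

for _ in range(2000):
    x_next = R @ (x - alpha * y)        # "pull" step: mix and descend along the tracked gradient
    y = C @ y + grad(x_next) - grad(x)  # "push" step: track the average gradient
    x = x_next

print(x)  # all agents approach the global minimizer of sum_i f_i, namely mean(b) = 4.0
```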
This paper deals with linear algebraic equations where the global coefficient matrix and constant vector are given, respectively, by the summation of the coefficient matrices and constant vectors of the individual agents. Our approach is based on reformulating the original problem as an unconstrained optimization. Based on this exact reformulation, we first provide a gradient-based, centralized algorithm which serves as a reference for the ensuing design of distributed algorithms. We propose two sets of exponentially stable continuous-time distributed algorithms that do not require the individual agent matrices to be invertible, and are based on estimating non-distributed terms in the centralized algorithm using dynamic average consensus. The first algorithm works for time-varying weight-balanced directed networks, and the second algorithm works for general directed networks for which the communication graphs might not be balanced. Numerical simulations illustrate our results.
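As an illustration of the centralized reference step only, the sketch below runs gradient descent on one natural least-squares reformulation of (sum_i A_i) x = sum_i b_i; the data are invented, and the distributed algorithms, which replace the global sums with dynamic-average-consensus estimates, are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented data: 4 agents each hold a private pair (A_i, b_i); the network-level system
# is (sum_i A_i) x = sum_i b_i, and the individual A_i need not be invertible.
n_agents, n = 4, 3
A_local = [rng.standard_normal((n, n)) for _ in range(n_agents)]
x_star = rng.standard_normal(n)
b_local = [Ai @ x_star for Ai in A_local]      # guarantees the aggregated system is solvable

A, b = sum(A_local), sum(b_local)               # global quantities (sums over agents)
step = 1.0 / np.linalg.norm(A, 2) ** 2          # safe step size: 1/L for f(x) = 0.5*||Ax - b||^2

# Centralized reference: gradient descent on the least-squares reformulation
# f(x) = 0.5 * ||A x - b||^2, whose minimizers solve the original equation.
x = np.zeros(n)
for _ in range(20000):
    x -= step * A.T @ (A @ x - b)

print(np.linalg.norm(A @ x - b))  # residual shrinks toward zero: x approximately solves the system
# A distributed version would replace the sums A and b (and hence the gradient) with
# per-agent estimates maintained by dynamic average consensus, as the abstract describes.
```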