
Distributed support-vector-machine over dynamic balanced directed networks

Posted by Mohammadreza Doostmohammadian
Publication date: 2021
Research language: English





In this paper, we consider the binary classification problem via distributed Support-Vector-Machines (SVM), where the idea is to train a network of agents, each with a limited share of the data, to cooperatively learn the SVM classifier for the global database. Agents only share processed information regarding the classifier parameters and the gradients of their local loss functions instead of their raw data. In contrast to existing work, we propose a continuous-time algorithm that incorporates network topology changes in discrete jumps. This hybrid nature allows us to remove the chattering that arises from discretizing the underlying continuous-time process. We show that the proposed algorithm converges to the SVM classifier over time-varying weight-balanced directed graphs by using arguments from matrix perturbation theory.
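As a rough illustration of the information pattern described above (agents exchange classifier parameters and local-loss gradients, never raw data), the following is a minimal discrete-time sketch that combines consensus mixing with a local hinge-loss subgradient step. It is not the paper's hybrid continuous-time algorithm; the directed-ring mixing matrix, step size, regularization, and synthetic data are all illustrative assumptions.

```python
# Minimal sketch: consensus mixing + local hinge-loss subgradient steps.
# NOT the paper's hybrid continuous-time algorithm; agents here only exchange
# their classifier parameters, mirroring the information pattern above.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_samples, dim = 4, 50, 2
lam, step, iters = 0.1, 0.05, 300          # regularization, step size, iterations (assumed)

# Each agent privately holds a shard of synthetic, roughly separable data.
X = [rng.normal(size=(n_samples, dim)) for _ in range(n_agents)]
y = [np.sign(x @ np.array([1.0, -1.0]) + 0.1) for x in X]

# Doubly stochastic mixing matrix of a directed ring (weight balanced).
W = 0.5 * (np.eye(n_agents) + np.roll(np.eye(n_agents), 1, axis=1))

w = np.zeros((n_agents, dim))              # one linear classifier per agent
for _ in range(iters):
    grads = np.zeros_like(w)
    for i in range(n_agents):
        margins = y[i] * (X[i] @ w[i])
        active = margins < 1.0             # samples violating the margin
        viol = (y[i][active][:, None] * X[i][active]).sum(axis=0)
        grads[i] = lam * w[i] - viol / n_samples   # local regularized hinge subgradient
    w = W @ w - step * grads               # consensus mixing + local gradient step

print("per-agent classifiers (should nearly agree):")
print(np.round(w, 3))
```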




Read also

Dynamic Multi-objective Optimization Problems (DMOPs) refer to optimization problems whose objective functions change with time. Solving DMOPs requires that the Pareto Optimal Set (POS) at different moments be found accurately, which is very difficult due to the dynamics of the optimization problem. The POS obtained in the past can help us find the POS at the next time step more quickly and accurately. Therefore, in this paper we present a Support Vector Machine (SVM) based Dynamic Multi-Objective Evolutionary optimization Algorithm, called SVM-DMOEA. The algorithm uses the previously obtained POS to train an SVM and then uses the trained SVM to classify solutions of the dynamic optimization problem at the next moment; it is thus able to generate an initial population consisting of individuals recognized by the trained SVM. The initial population can be fed into any population-based optimization algorithm, e.g., the Nondominated Sorting Genetic Algorithm II (NSGA-II), to obtain the POS at that moment. The experimental results show the validity of our proposed approach.
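As a hedged sketch of the seeding idea described above, the snippet below trains an SVM on solutions labeled as Pareto-optimal (from a previous time step) versus dominated ones, then keeps randomly generated candidates that the SVM classifies as Pareto-like to form an initial population; NSGA-II (or any other population-based optimizer) would then refine it. The problem dimension, bounds, and synthetic data are assumptions, not taken from the paper.

```python
# Sketch of SVM-based seeding for the next environment (assumed 2-D decision space).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
dim, pop_size = 2, 100

prev_pos = rng.uniform(0.4, 0.6, size=(pop_size, dim))    # stand-in for the previous POS
dominated = rng.uniform(0.0, 1.0, size=(pop_size, dim))   # stand-in for dominated solutions

X = np.vstack([prev_pos, dominated])
labels = np.hstack([np.ones(pop_size), np.zeros(pop_size)])
clf = SVC(kernel="rbf").fit(X, labels)                    # learn the "Pareto-like" region

# Sample candidates for the next moment and keep those the SVM accepts.
candidates = rng.uniform(0.0, 1.0, size=(50 * pop_size, dim))
seeded = candidates[clf.predict(candidates) == 1][:pop_size]
print("seeded initial population size:", len(seeded))     # would then be refined by NSGA-II
```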
This paper aims at addressing distributed averaging problems for signed networks in the presence of general directed topologies that are represented by signed digraphs. A new class of improved Laplacian potential functions is proposed by introducing two notions for any signed digraph, the induced unsigned digraph and the mirror (undirected) signed graph, based on which two distributed averaging protocols are designed using nearest-neighbor rules. It is shown that with either of the designed protocols, signed-average consensus (respectively, state stability) can be achieved if and only if the associated signed digraph of the signed network is structurally balanced (respectively, unbalanced), regardless of whether weight balance is satisfied. Further, the improved Laplacian potential functions can be exploited to solve fixed-time consensus problems of signed networks with directed topologies, for which a nonlinear distributed protocol is proposed to ensure bipartite consensus or state stability within a fixed time. Additionally, the convergence analyses of directed signed networks can be carried out with the Lyapunov stability analysis method, which is realized by revealing the tight relationship between the convergence behaviors of directed signed networks and the properties of the improved Laplacian potential functions. Illustrative examples are presented to demonstrate the validity of our theoretical results for directed signed networks.
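A tiny simulation can illustrate the nearest-neighbor rule over a structurally balanced signed digraph. The sketch below uses a discrete-time (Euler-style) version of the signed-Laplacian update on an assumed four-node directed cycle; it only demonstrates bipartite consensus and is not the paper's fixed-time protocol.

```python
# Discrete-time nearest-neighbor rule on a structurally balanced signed digraph:
# x_i <- x_i - eps * sum_j |a_ij| * (x_i - sign(a_ij) * x_j),
# where a_ij is the (signed) weight node i assigns to neighbor j.
# Nodes {0,1} and {2,3} form antagonistic groups (assumed example).
import numpy as np

A = np.array([[ 0.0,  1.0,  0.0,  0.0],   # node 0 trusts node 1 (cooperative)
              [ 0.0,  0.0, -1.0,  0.0],   # node 1 distrusts node 2 (antagonistic)
              [ 0.0,  0.0,  0.0,  1.0],   # node 2 trusts node 3 (cooperative)
              [-1.0,  0.0,  0.0,  0.0]])  # node 3 distrusts node 0 (antagonistic)

x, eps = np.array([3.0, -1.0, 4.0, 0.5]), 0.2
for _ in range(400):
    update = np.zeros_like(x)
    for i in range(len(x)):
        for j in range(len(x)):
            update[i] += abs(A[i, j]) * (x[i] - np.sign(A[i, j]) * x[j])
    x = x - eps * update

# Bipartite consensus: the two groups reach values of equal magnitude, opposite sign.
print(np.round(x, 4))
```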
In this paper we study the distributed average consensus problem in multi-agent systems with directed communication links that are subject to quantized information flow. Specifically, we present and analyze a distributed averaging algorithm which operates exclusively with quantized values (i.e., the information stored, processed and exchanged between neighboring agents is subject to deterministic uniform quantization) and relies on event-driven updates (e.g., to reduce energy consumption, communication bandwidth, network congestion, and/or processor usage). The main idea of the proposed algorithm is that each node (i) models its initial state as two quantized fractions whose numerators are equal to the node's initial state and whose denominators are equal to one, and (ii) transmits one fraction randomly while it keeps the other stored. Then, every time it receives one or more fractions, it averages their numerators with the numerator of the fraction it stored, and then transmits them to randomly selected out-neighbors. We characterize the properties of the proposed distributed algorithm and show that its execution, on any static and strongly connected digraph, allows each agent to reach in finite time a fixed state that is equal (within one quantization level) to the average of the initial states. We extend the operation of the algorithm to achieve finite-time convergence in the presence of a dynamic directed communication topology subject to some connectivity conditions. Finally, we provide examples to illustrate the operation, performance, and potential advantages of the proposed algorithm. We compare against state-of-the-art quantized average consensus algorithms and show that our algorithm's convergence speed significantly outperforms most existing protocols.
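The keep-one/send-one fraction idea can be illustrated with a very simplified, synchronous toy on an assumed directed ring: every node stores its state as two integer numerators (denominators equal to one), forwards one of them each round, and averages whatever it receives with the numerator it kept, using floor/ceiling arithmetic so all values stay quantized. This is only an approximation of the event-driven protocol described above, not a faithful implementation.

```python
# Toy, synchronous rendition of the keep-one/send-one fraction idea on a
# directed ring (NOT the paper's event-driven protocol). All stored values
# remain integers; their grand sum is conserved by the floor/ceil averaging.
n = 6
initial = [7, 1, 4, 10, 2, 6]                      # integer initial states, average = 5
kept = list(initial)                               # the fraction each node keeps
in_flight = [[initial[i]] for i in range(n)]       # the fraction each node is sending

for _ in range(50):
    inbox = [[] for _ in range(n)]
    for i in range(n):                             # node i forwards to node (i + 1) % n
        inbox[(i + 1) % n].extend(in_flight[i])
        in_flight[i] = []
    for i in range(n):
        for v in inbox[i]:                         # average the received numerator with
            s = kept[i] + v                        # the kept one, in integer arithmetic:
            kept[i] = s // 2                       # keep the floor of the average ...
            in_flight[i].append(s - s // 2)        # ... and forward the ceiling

print("true average:", sum(initial) / n)           # 5.0
print("kept values :", kept)                       # here they settle exactly at 5
```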
This article focuses on multi-agent distributed optimization problems with a common decision variable, a global linear equality constraint, and local set constraints over directed interconnection topologies. We propose a novel ADMM-based distributed algorithm to solve the above problem. During every iteration of the algorithm, each agent solves a local convex optimization problem and utilizes a finite-time "approximate consensus" protocol to update its local estimate of the optimal solution. The proposed algorithm is the first ADMM-based algorithm with convergence guarantees to solve distributed multi-agent optimization problems where the interconnection topology is directed. We establish two strong explicit convergence rate estimates for the proposed algorithm to the optimal solution under two different sets of assumptions on the problem data. Further, we evaluate our proposed algorithm by solving two non-linear and non-differentiable constrained distributed optimization problems over directed graphs. Additionally, we provide a numerical comparison of the proposed algorithm with other state-of-the-art algorithms to show its efficacy over the existing methods in the literature.
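The iteration structure described above (a local convex solve per agent followed by a consensus-type update) can be sketched with a standard consensus-ADMM loop. In the sketch below, the exact averaging step stands in for the paper's finite-time approximate consensus protocol over a directed graph, and the quadratic local costs are illustrative assumptions.

```python
# Standard consensus-ADMM sketch: per-agent local solve + consensus + dual update.
# The exact averaging in the z-step stands in for the paper's finite-time
# approximate consensus protocol; local costs f_i(x) = 0.5*(x - a_i)^2 are assumed.
import numpy as np

a = np.array([1.0, 4.0, -2.0, 7.0])        # private data of four agents
rho, iters = 1.0, 50
x = np.zeros(4)                            # local primal estimates
u = np.zeros(4)                            # scaled dual variables
z = 0.0                                    # consensus variable

for _ in range(iters):
    x = (a + rho * (z - u)) / (1 + rho)    # closed-form local solve for each agent
    z = np.mean(x + u)                     # consensus step (stand-in for the protocol)
    u = u + x - z                          # dual update

print("local estimates:", np.round(x, 4), "| centralized optimum:", a.mean())
```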
In this paper, we consider the problem of optimally coordinating the response of a group of distributed energy resources (DERs) so they collectively meet the electric power demanded by a collection of loads, while minimizing the total generation cost and respecting the DER capacity limits. This problem can be cast as a convex optimization problem, where the global objective is to minimize a sum of convex functions corresponding to individual DER generation cost, while satisfying (i) linear inequality constraints corresponding to the DER capacity limits and (ii) a linear equality constraint corresponding to the total power generated by the DERs being equal to the total power demand. We develop distributed algorithms to solve the DER coordination problem over time-varying communication networks with either bidirectional or unidirectional communication links. The proposed algorithms can be seen as distribute
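To make the DER coordination problem concrete, the sketch below solves a small instance centrally by bisection on the common incremental cost (the dual variable of the power-balance constraint); the paper's contribution is solving this same convex program distributively over time-varying networks. The cost coefficients, capacity limits, and demand are assumed values.

```python
# Economic-dispatch-style DER coordination solved centrally by bisection on the
# common incremental cost; costs, limits, and demand below are assumptions.
import numpy as np

alpha = np.array([2.0, 1.0, 4.0, 1.5])     # quadratic cost c_i(p) = 0.5 * alpha_i * p**2
p_max = np.array([3.0, 5.0, 2.0, 4.0])     # DER capacity limits (lower limit 0)
demand = 8.0                               # total load that must be met

def total_generation(price):
    # Each DER's optimal response to a common price, clipped to its capacity.
    return np.clip(price / alpha, 0.0, p_max).sum()

lo, hi = 0.0, 100.0
for _ in range(60):                        # bisection on the price (dual variable)
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if total_generation(mid) < demand else (lo, mid)

dispatch = np.clip(0.5 * (lo + hi) / alpha, 0.0, p_max)
print("dispatch:", np.round(dispatch, 3), "| total:", round(dispatch.sum(), 3))
```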
