
Interplay Between Resilience and Accuracy in Resilient Vector Consensus in Multi-Agent Networks

 Added by Waseem Abbas
 Publication date 2020
Language: English





In this paper, we study the relationship between resilience and accuracy in the resilient distributed multi-dimensional consensus problem. We consider a network of agents, each of which has a state in $\mathbb{R}^d$. Some agents in the network are adversarial and can change their states arbitrarily. The normal (non-adversarial) agents interact locally and update their states to achieve consensus at some point in the convex hull $\mathcal{C}$ of their initial states. This objective is achievable if the number of adversaries in the neighborhood of normal agents is less than a specific value, which is a function of the local connectivity and the state dimension $d$. However, to be resilient against adversaries, especially for large $d$, the required local connectivity is large. We show that resilience against adversarial agents can be improved if normal agents are allowed to converge in a bounded region $\mathcal{B} \supseteq \mathcal{C}$, which means normal agents converge at some point close to, but not necessarily inside, $\mathcal{C}$ in the worst case. The accuracy of resilient consensus can be measured by the Hausdorff distance between $\mathcal{B}$ and $\mathcal{C}$. As a result, resilience can be improved at the cost of accuracy. We propose a resilient bounded consensus algorithm that exploits the trade-off between resilience and accuracy by projecting $d$-dimensional states into lower dimensions and then solving instances of resilient consensus in the lower dimensions. We analyze the algorithm, present various resilience and accuracy bounds, and numerically evaluate our results.
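The projection idea can be illustrated with a minimal sketch: a standard one-dimensional resilient primitive (an MSR-style trimmed mean) applied to each coordinate projection of the $d$-dimensional states. This is only an illustration under simplifying assumptions (projections onto the coordinate axes, a known bound `F` on adversarial neighbors); the paper's algorithm works with general lower-dimensional projections, and the function names here are hypothetical.

```python
import numpy as np

def trimmed_mean_update(own, neighbor_values, F):
    """One resilient scalar-consensus step (MSR-style trimmed mean).

    own: agent's current value for one coordinate
    neighbor_values: 1-D array of neighbor values (may include adversaries)
    F: assumed maximum number of adversarial neighbors
    """
    vals = np.sort(np.asarray(neighbor_values, float))
    # discard the F largest and F smallest neighbor values, then average
    trimmed = vals[F:len(vals) - F] if len(vals) > 2 * F else np.array([own])
    return np.mean(np.append(trimmed, own))

def resilient_bounded_step(own, neighbor_states, F):
    """Apply the 1-D trimmed-mean rule coordinate-wise, i.e., on the
    projection of each state onto the coordinate axes."""
    own = np.asarray(own, float)
    neighbor_states = np.asarray(neighbor_states, float)
    return np.array([trimmed_mean_update(own[k], neighbor_states[:, k], F)
                     for k in range(own.size)])
```

Because the trimming happens per one-dimensional projection rather than in $\mathbb{R}^d$ directly, the required connectivity is that of the scalar problem, but the agreed point is only guaranteed to lie in a bounded region around the convex hull of the normal states, which is exactly the resilience-versus-accuracy trade-off described above.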



Related Research

In this paper, we study the resilient vector consensus problem in networks with adversarial agents and improve the resilience guarantees of existing algorithms. A common approach to achieving resilient vector consensus is for every non-adversarial (or normal) agent in the network to update its state by moving towards a point in the convex hull of its \emph{normal} neighbors' states. Since an agent cannot distinguish between its normal and adversarial neighbors, computing such a point, often called a \emph{safe point}, is a challenging task. To compute a safe point, we propose to use the notion of a \emph{centerpoint}, which is an extension of the median to higher dimensions, instead of the Tverberg partition of points, which is often used for this purpose. We show that the notion of centerpoint provides a complete characterization of safe points in $\mathbb{R}^d$. In particular, we show that a safe point is essentially an interior centerpoint if the number of adversaries in the neighborhood of a normal agent $i$ is less than $\frac{N_i}{d+1}$, where $d$ is the dimension of the state vector and $N_i$ is the total number of agents in the neighborhood of $i$. Consequently, we obtain necessary and sufficient conditions on the number of adversarial agents to guarantee resilient vector consensus. Further, by considering the complexity of computing centerpoints, we discuss improvements in the resilience guarantees of vector consensus algorithms and compare them with other existing approaches. Finally, we numerically evaluate the performance of our approach through experiments.
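For intuition: a centerpoint of $N$ points in $\mathbb{R}^d$ is a point $p$ such that every closed halfspace containing $p$ contains at least $\lceil N/(d+1) \rceil$ of the points; for $d=1$ this reduces to the median. A brute-force membership check for $d=2$ can be sketched as below. The sweep over slightly perturbed candidate normals is an illustrative numerical approximation, not the paper's construction, and the function name is hypothetical.

```python
import numpy as np

def is_centerpoint(c, pts, tol=1e-9, eps=1e-6):
    """Check whether c is a centerpoint of pts in R^2: every closed
    halfplane containing c must contain >= ceil(N/3) of the points.
    The worst halfplanes have c on their boundary, so we sweep boundary
    normals perpendicular to each (point - c) direction, slightly
    perturbed to probe the open angular intervals between them."""
    pts = np.asarray(pts, float)
    c = np.asarray(c, float)
    need = int(np.ceil(len(pts) / 3.0))   # N/(d+1) with d = 2
    vecs = pts - c
    base = np.arctan2(vecs[:, 1], vecs[:, 0])
    cands = np.concatenate([base + np.pi / 2, base - np.pi / 2])
    for a in np.concatenate([cands, cands + eps, cands - eps]):
        u = np.array([np.cos(a), np.sin(a)])
        # count points in the closed halfplane {x : (x - c) . u >= 0}
        if np.sum(vecs @ u >= -tol) < need:
            return False
    return True
```

The $\frac{N_i}{d+1}$ threshold in the abstract mirrors the $N/(d+1)$ depth guarantee in this definition: as long as fewer than $N_i/(d+1)$ neighbors are adversarial, a centerpoint of the neighborhood states cannot be pulled outside the convex hull of the normal states.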
This paper considers the multi-agent reinforcement learning (MARL) problem for a networked (peer-to-peer) system in the presence of Byzantine agents. We build on an existing distributed $Q$-learning algorithm and allow certain agents in the network to behave in an arbitrary and adversarial manner (as captured by the Byzantine attack model). Under the proposed algorithm, if the network topology is $(2F+1)$-robust and up to $F$ Byzantine agents exist in the neighborhood of each regular agent, we establish the almost sure convergence of all regular agents' value functions to a neighborhood of the optimal value function of the regular agents. For each state, if the optimal $Q$-values of all regular agents corresponding to different actions are sufficiently separated, our approach allows each regular agent to learn the optimal policy for all regular agents.
Yutao Tang, 2020
This paper investigates an optimal consensus problem for a group of uncertain linear multi-agent systems. All agents are allowed to possess parametric uncertainties that range over an arbitrarily large compact set. The goal is to collectively minimize a sum of local costs in a distributed fashion and finally achieve an output consensus on this optimal point using only output information of agents. By adding an optimal signal generator to generate the global optimal point, we convert this problem to several decentralized robust tracking problems. Output feedback integral control is constructively given to achieve an optimal consensus under a mild graph connectivity condition. The efficacy of this control is verified by a numerical example.
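The paper's design is an output-feedback integral controller for uncertain linear agents; as a much simpler illustration of the shared objective of minimizing a sum of local costs while reaching consensus, here is a standard distributed (sub)gradient sketch with a diminishing step size. The quadratic local costs, mixing matrix, and function name are illustrative assumptions, not the paper's method.

```python
import numpy as np

def distributed_gradient_consensus(grads, W, x0, iters=2000, a=0.5):
    """Distributed gradient descent: each agent averages its scalar state
    with neighbors via the doubly stochastic mixing matrix W, then steps
    against its own local gradient with diminishing step a/(k+1)."""
    x = np.asarray(x0, float)
    for k in range(iters):
        g = np.array([grads[i](x[i]) for i in range(len(x))])
        x = W @ x - (a / (k + 1)) * g
    return x

# two agents with local costs (x-1)^2 and (x-3)^2; the sum is minimized at x = 2
grads = [lambda z: 2 * (z - 1), lambda z: 2 * (z - 3)]
W = np.array([[0.5, 0.5], [0.5, 0.5]])
```

No single agent knows the global cost, yet the mixing step spreads gradient information so that both agents drift toward the minimizer of the sum rather than of their own local cost.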
In this paper, we propose a novel method, not based on Lyapunov function arguments, to establish stability and convergence to a consensus state for a class of discrete-time Multi-Agent Systems (MASs) evolving according to nonlinear heterogeneous local interaction rules. In particular, we focus on a class of discrete-time MASs whose global dynamics can be represented by sub-homogeneous and order-preserving nonlinear maps. This paper directly generalizes results for sub-homogeneous and order-preserving linear maps, which are shown to be the counterpart of stochastic matrices thanks to nonlinear Perron-Frobenius theory. We provide sufficient conditions on the structure of the local interaction rules among agents to establish convergence to a fixed point, and we study the consensus problem in this generalized framework as a particular case. Examples are provided to corroborate the theoretical analysis and show the effectiveness of the method.
In this paper, a distributed learning leader-follower consensus protocol based on Gaussian process regression is designed for a class of nonlinear multi-agent systems with unknown dynamics. We propose a distributed learning approach to predict the residual dynamics for each agent. The stability of the consensus protocol using the data-driven model of the dynamics is shown via Lyapunov analysis. By applying the proposed control law, the followers ultimately synchronize to the leader with guaranteed error bounds with high probability. The effectiveness and applicability of the developed protocol are demonstrated by simulation examples.
