In this paper, we study the resilient vector consensus problem in networks with adversarial agents and improve resilience guarantees of existing algorithms. A common approach to achieving resilient vector consensus is that every non-adversarial (or normal) agent in the network updates its state by moving towards a point in the convex hull of its \emph{normal} neighbors' states. Since an agent cannot distinguish between its normal and adversarial neighbors, computing such a point, often called a \emph{safe point}, is a challenging task. To compute a safe point, we propose to use the notion of \emph{centerpoint}, which is an extension of the median to higher dimensions, instead of the Tverberg partition of points, which is often used for this purpose. We discuss that the notion of centerpoint provides a complete characterization of safe points in $\mathbb{R}^d$. In particular, we show that a safe point is essentially an interior centerpoint if the number of adversaries in the neighborhood of a normal agent $i$ is less than $\frac{N_i}{d+1}$, where $d$ is the dimension of the state vector and $N_i$ is the total number of agents in the neighborhood of $i$. Consequently, we obtain necessary and sufficient conditions on the number of adversarial agents to guarantee resilient vector consensus. Further, by considering the complexity of computing centerpoints, we discuss improvements in the resilience guarantees of vector consensus algorithms and compare them with other existing approaches. Finally, we numerically evaluate the performance of our approach through experiments.
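As a minimal illustration of the safe-point condition above (not the paper's centerpoint computation), the sketch below evaluates the bound $F < \frac{N_i}{d+1}$ and shows the one-dimensional special case, where a centerpoint reduces to the median; the helper names and example values are our own assumptions.

```python
import numpy as np

def max_adversaries(num_neighbors: int, d: int) -> int:
    """Largest integer F satisfying F < N_i / (d + 1), per the stated condition."""
    return int(np.ceil(num_neighbors / (d + 1))) - 1

# One-dimensional special case: a centerpoint reduces to the median, which
# remains inside the interval spanned by the normal neighbors' values as long
# as fewer than N_i / 2 neighbors are adversarial.
neighbor_states = np.array([0.9, 1.1, 1.0, 5.0, 1.2])   # 5.0 injected by an adversary
safe_point = np.median(neighbor_states)                  # 1.1, still among the normal values
print(max_adversaries(len(neighbor_states), d=1), safe_point)
```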
In this paper, we study the relationship between resilience and accuracy in the resilient distributed multi-dimensional consensus problem. We consider a network of agents, each of which has a state in $\mathbb{R}^d$. Some agents in the network are adversarial and can change their states arbitrarily. The normal (non-adversarial) agents interact locally and update their states to achieve consensus at some point in the convex hull $\mathcal{C}$ of their initial states. This objective is achievable if the number of adversaries in the neighborhood of normal agents is less than a specific value, which is a function of the local connectivity and the state dimension $d$. However, to be resilient against adversaries, especially in the case of large $d$, the required local connectivity is large. We show that resilience against adversarial agents can be improved if normal agents are allowed to converge in a bounded region $\mathcal{B} \supseteq \mathcal{C}$, which means normal agents converge at some point close to, but not necessarily inside, $\mathcal{C}$ in the worst case. The accuracy of resilient consensus can be measured by the Hausdorff distance between $\mathcal{B}$ and $\mathcal{C}$. As a result, resilience can be improved at the cost of accuracy. We propose a resilient bounded consensus algorithm that exploits the trade-off between resilience and accuracy by projecting $d$-dimensional states into lower dimensions and then solving instances of resilient consensus in lower dimensions. We analyze the algorithm, present various resilience and accuracy bounds, and numerically evaluate our results.
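The sketch below illustrates the projection idea under assumed details: states are projected onto the coordinate axes and a standard 1-D trimmed-mean (MSR-style) update is applied per coordinate. The trimming rule, the parameter F, and the example data are our assumptions, not the paper's exact algorithm; coordinatewise updates of this kind generally converge inside a bounding box of the normal initial states rather than inside $\mathcal{C}$ itself, which reflects the accuracy trade-off described above.

```python
import numpy as np

def trimmed_mean_update(own: float, neighbor_vals: np.ndarray, F: int) -> float:
    """MSR-style 1-D update: discard the F largest and F smallest neighbor
    values and average the remaining values together with the node's own."""
    kept = np.sort(neighbor_vals)[F:len(neighbor_vals) - F]
    return np.mean(np.append(kept, own))

def coordinatewise_update(own: np.ndarray, neighbors: np.ndarray, F: int) -> np.ndarray:
    """Apply the 1-D resilient update to each coordinate (projection onto axes)."""
    return np.array([trimmed_mean_update(own[k], neighbors[:, k], F)
                     for k in range(own.size)])

own = np.array([0.0, 0.0])
neighbors = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1], [10.0, -10.0]])  # last row adversarial
print(coordinatewise_update(own, neighbors, F=1))
```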
We consider the distributed $H_\infty$ estimation problem with an additional requirement of resilience to biasing attacks. An attack scenario is considered where an adversary misappropriates some of the observer nodes and injects biasing signals into the observer dynamics. The paper proposes a procedure for deriving a distributed observer that endows each node with an attack detector, which also functions as an attack-compensating feedback controller for the main observer. Connecting these controlled observers into a network results in a distributed observer whose nodes produce unbiased, robust estimates of the plant. We show that the gains for each controlled observer in the network can be computed in a decentralized fashion, thus reducing the vulnerability of the network.
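As a purely illustrative single-node sketch (assumed scalar dynamics, ad hoc gains, and a simple integral compensator rather than the paper's distributed $H_\infty$ synthesis), the code below shows the basic idea of an innovation-driven compensator estimating and cancelling a constant bias injected into the observer dynamics.

```python
# Single-node sketch: scalar plant x' = a*x, measurement y = c*x; an attacker
# injects a constant bias into the observer dynamics after t = 5 s, and an
# integral compensator driven by the innovation estimates and cancels it.
a, c = -1.0, 1.0
L, k_I = 2.0, 1.0            # observer and compensator gains (chosen ad hoc)
dt, steps = 1e-3, 20_000

x, xhat, z = 1.0, 0.0, 0.0   # plant state, estimate, bias estimate
for k in range(steps):
    bias = 0.5 if k * dt > 5.0 else 0.0
    y = c * x
    innov = y - c * xhat
    x    += dt * (a * x)
    xhat += dt * (a * xhat + L * innov + bias - z)   # attacker adds `bias`, node subtracts z
    z    += dt * (-k_I * innov)                      # integral action drives z toward `bias`

print(abs(x - xhat), z)      # estimation error ~ 0, bias estimate z ~ 0.5
```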
We study the distributed average consensus problem in multi-agent systems with directed communication links that are subject to quantized information flow. The goal of distributed average consensus is for the nodes, each associated with some initial value, to obtain the average (or some value close to the average) of these initial values. In this paper, we present and analyze a distributed averaging algorithm which operates exclusively with quantized values (specifically, the information stored, processed, and exchanged between neighboring agents is subject to deterministic uniform quantization) and relies on event-driven updates (e.g., to reduce energy consumption, communication bandwidth, network congestion, and/or processor usage). We characterize the properties of the proposed distributed averaging protocol, illustrate its operation with an example, and show that its execution, on any time-invariant and strongly connected digraph, allows all agents to reach, in finite time, a common consensus value that is equal to the quantized average. We conclude with comparisons against existing quantized average consensus algorithms that illustrate the performance and potential advantages of the proposed algorithm.
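The toy sketch below illustrates only the notion of a quantized average that all agents should agree on: integer-valued states are repeatedly split as evenly as possible between randomly chosen pairs, which preserves the sum and drives all values to within one quantization level of the true average. The gossip model, pair selection, and step count are our assumptions and not the event-triggered digraph algorithm proposed in the paper.

```python
import random

def quantized_gossip(values, steps=10_000, seed=0):
    """Toy illustration of the quantized average: neighbors repeatedly split
    their integer sum as evenly as possible, preserving the total while all
    values move to within one quantization level of the average."""
    rng = random.Random(seed)
    v = list(values)
    n = len(v)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)     # all-to-all gossip for simplicity
        s = v[i] + v[j]
        v[i], v[j] = s // 2, s - s // 2
    return v

initial = [3, 9, 1, 14, 7]                 # integer (already-quantized) initial values
print(sum(initial) / len(initial), quantized_gossip(initial))
```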
In this paper, we consider the problem of privacy preservation in the average consensus problem when communication among nodes is quantized. More specifically, we consider a setting where some nodes in the network are curious but not malicious: they try to identify the initial states of other nodes based on the data they receive during operation (without otherwise interfering in the computation), while other nodes in the network want to ensure that their initial states cannot be inferred exactly by the curious nodes. We propose two privacy-preserving event-triggered quantized average consensus algorithms that can be followed by any node wishing to maintain its privacy and not reveal the initial state it contributes to the average computation. Every node in the network (including the curious nodes) is allowed to execute either a privacy-preserving algorithm or its underlying average consensus algorithm. Under certain topological conditions, both algorithms allow the nodes that adopt privacy-preserving protocols to preserve the privacy of their initial quantized states and, at the same time, to obtain, after a finite number of steps, the exact average of the initial states.
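A minimal sketch of the masking idea (an assumption on our part; the paper's protocols operate on quantized states with event-triggered communication): a privacy-seeking node spreads its contribution over several rounds using random integer offsets that cancel, so the sum, and hence the average, is preserved while no single transmitted value reveals the true initial state.

```python
import random

def masked_contributions(true_value: int, rounds: int = 4, seed: int = 1):
    """Split an initial value into several randomized parts whose total equals
    the true value, so the average computation is unaffected while each
    individual part looks random to a curious observer."""
    rng = random.Random(seed)
    parts = [rng.randint(-10, 10) for _ in range(rounds - 1)]
    parts.append(true_value - sum(parts))   # final part restores the exact total
    return parts

parts = masked_contributions(7)
print(parts, sum(parts))                    # parts look random; their sum is 7
```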
In this paper, we design a distributed learning leader-follower consensus protocol based on Gaussian process regression for a class of nonlinear multi-agent systems with unknown dynamics. We propose a distributed learning approach to predict the residual dynamics of each agent. The stability of the consensus protocol using the data-driven model of the dynamics is shown via Lyapunov analysis. By applying the proposed control law, the followers ultimately synchronize to the leader with guaranteed error bounds that hold with high probability. The effectiveness and applicability of the developed protocol are demonstrated by simulation examples.
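A minimal sketch of the learning component under assumed scalar dynamics and ad hoc gains: a Gaussian process (here scikit-learn's GaussianProcessRegressor with an RBF kernel) is fit to samples of an unknown residual, and its mean prediction is used as a feedforward correction in a simple leader-tracking law. This is not the paper's protocol or its stability analysis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def residual(x):
    """Unknown residual dynamics (used here only to generate training data)."""
    return np.sin(x)

# Train a GP on noisy samples of the residual.
X_train = np.linspace(-3, 3, 30).reshape(-1, 1)
y_train = residual(X_train).ravel() + 0.01 * np.random.default_rng(0).standard_normal(30)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4).fit(X_train, y_train)

# Scalar follower integrator with residual dynamics, tracking a constant leader.
x_leader, x_follower, k_gain, dt = 1.0, -2.0, 2.0, 1e-2
for _ in range(1000):
    f_hat = gp.predict(np.array([[x_follower]]))[0]
    u = -k_gain * (x_follower - x_leader) - f_hat     # consensus term + learned compensation
    x_follower += dt * (u + residual(x_follower))     # true dynamics include the residual
print(abs(x_follower - x_leader))                      # small residual tracking error
```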