We consider the distributed $H_\infty$ estimation problem with an additional requirement of resilience to biasing attacks. An attack scenario is considered where an adversary misappropriates some of the observer nodes and injects biasing signals into the observer dynamics. The paper proposes a procedure for deriving a distributed observer that endows each node with an attack detector, which also functions as an attack-compensating feedback controller for the main observer. Connecting these controlled observers into a network results in a distributed observer whose nodes produce unbiased robust estimates of the plant. We show that the gains for each controlled observer in the network can be computed in a decentralized fashion, thus reducing the vulnerability of the network.
We consider the distributed $H_\infty$ estimation problem with an additional requirement of resilience to biasing attacks. An attack scenario is considered where an adversary misappropriates some of the observer nodes and injects biasing signals into the observer dynamics. Using a dynamic model of the biasing attack inputs, a novel distributed state estimation algorithm is proposed which involves feedback from a network of attack detection filters. We show that each observer in the network can be computed in real time and in a decentralized fashion. When these controlled observers are interconnected to form a network, they are shown to cooperatively produce an unbiased estimate of the plant, even though some of the nodes are compromised.
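To make the node structure described in these two abstracts concrete, here is a minimal Python sketch of two interconnected observer nodes, each combining output injection, a consensus coupling with its neighbor, and feedback from a crude bias estimator that compensates an injected attack. All matrices and gains (A, C, L, K, G) are illustrative assumptions chosen by hand, not the $H_\infty$-synthesized gains the papers derive.

```python
# Sketch: distributed observer nodes with bias-compensating detector feedback.
# Gains are ad hoc for illustration; the papers obtain them via H-infinity synthesis.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -1.0]])   # plant dynamics (stable, assumed)
C = np.array([[1.0, 0.0]])                 # each node measures x1
L = np.array([[1.5], [1.0]])               # output-injection gain (assumed)
K = 0.8                                    # consensus coupling gain (assumed)
G = np.array([[2.0], [2.0]])               # detector/bias-estimator gain (assumed)
dt, T = 1e-3, 10.0

x = np.array([1.0, 0.0])                   # plant state
xh = [np.zeros(2), np.zeros(2)]            # node estimates
fh = [np.zeros(2), np.zeros(2)]            # node bias estimates

for k in range(int(T / dt)):
    bias = np.array([0.0, 3.0]) if k * dt > 2.0 else np.zeros(2)  # attack on node 2
    y = C @ x
    for i in range(2):
        j = 1 - i                          # the single neighbour
        innov = y - C @ xh[i]
        xdot = (A @ xh[i] + L @ innov
                + K * (xh[j] - xh[i])      # network interconnection term
                + (bias if i == 1 else 0.0)  # adversary biases node 2's dynamics
                - fh[i])                   # detector feedback compensates the bias
        fh[i] = fh[i] + dt * (G @ innov)   # crude integral-action bias estimator
        xh[i] = xh[i] + dt * xdot
    x = x + dt * (A @ x)

print("final estimation errors:", [np.linalg.norm(x - xh[i]) for i in range(2)])
```

Without the `- fh[i]` compensation term, node 2's estimate (and, through the coupling, node 1's) would remain biased after the attack starts; the feedback is what restores unbiasedness.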
We develop a decentralized $H_\infty$ synthesis approach to the detection of biasing misappropriation attacks on distributed observers. Its starting point is to equip the observer with an attack model, which is then used in the design of attack detectors. A two-step design procedure is proposed. First, an initial centralized setup is carried out, which enables each node to compute the parameters of its attack detector online in a decentralized manner, without interacting with other nodes. Each such detector is designed using the $H_\infty$ approach. Next, the attack detectors are embedded into the network, which allows them to detect misappropriated nodes from the innovations in the network interconnections.
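The decentralized online step can be pictured as each node integrating a local Riccati-type equation to obtain its detector gain without communicating with other nodes. The sketch below uses a generic continuous-time $H_\infty$ filter Riccati ODE as a stand-in; the paper's actual detector equations differ, and the matrices and attenuation level $\gamma$ here are all assumptions.

```python
# Sketch of the "online, decentralized" step: after a one-off centralized setup
# fixes (A, C_i, gamma), a node integrates a generic H-infinity-type filter
# Riccati ODE locally to obtain its gain; no inter-node communication needed.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -1.0]])  # shared plant model (assumed)
Ci = np.array([[1.0, 0.0]])               # this node's local output map
Q = np.eye(2)                             # process-noise weight (assumed)
gamma = 5.0                               # H-infinity attenuation level (assumed)
dt = 1e-3

P = np.eye(2)                             # Riccati state, integrated in real time
for _ in range(int(10.0 / dt)):
    # dP/dt = AP + PA' + Q - P (C'C - gamma^-2 I) P   (generic H-inf filter form)
    Pdot = A @ P + P @ A.T + Q - P @ (Ci.T @ Ci - gamma**-2 * np.eye(2)) @ P
    P = P + dt * Pdot

Ki = P @ Ci.T                             # the node's local detector/observer gain
print("local gain K_i =\n", Ki)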
We study how to secure distributed filters for linear time-invariant systems with bounded noise under false-data injection attacks. A malicious attacker is able to arbitrarily manipulate the observations of a time-varying and unknown subset of the sensors. We first propose a recursive distributed filter consisting of two steps at each update. The first step employs a saturation-like scheme, which gives a small gain when the innovation is large, indicating a potential attack. The second step is a consensus operation on the state estimates among neighboring sensors. We prove that the estimation error is upper bounded if the filter parameters satisfy a certain condition. We further analyze the feasibility of this condition and connect it to sparse observability in the centralized case. When the attacked sensor set is known to be time-invariant, the secured filter is modified by adding an online local attack detector. The detector is able to identify the attacked sensors whose observation innovations exceed the detection thresholds. Also, as more attacked sensors are detected, the thresholds adaptively adjust to reduce the space of stealthy attack signals. The resilience of the secured filter with detection is verified by an explicit relationship between the upper bound of the estimation error and the number of detected attacked sensors. Moreover, for the noise-free case, we prove that the state estimate of each sensor asymptotically converges to the system state under certain conditions. Numerical simulations are provided to illustrate the developed results.
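The two-step update is directly codeable. The following Python sketch implements the pattern the abstract describes: a local correction whose innovation is saturated, so a huge (possibly attacked) innovation contributes only a bounded correction, followed by plain consensus averaging with neighbors. The gains, the saturation level, and the consensus weights are illustrative placeholders, not the parameters derived in the paper.

```python
# Sketch of the two-step secure filter: (1) saturated-innovation local update,
# (2) consensus on estimates. Two sensors, one under false-data injection.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])    # discrete-time LTI plant (assumed)
C = np.array([[1.0, 0.0]])
K = np.array([[0.5], [0.5]])              # local filter gain (assumed)
sat_level = 1.0                           # saturation threshold (assumed)
W = np.array([[0.5, 0.5], [0.5, 0.5]])    # doubly-stochastic consensus weights

def saturate(v, level):
    """Clip the innovation elementwise: a large innovation yields a small
    effective gain, bounding the impact of a potential attack."""
    return np.clip(v, -level, level)

x = np.array([1.0, 0.2])
xh = np.zeros((2, 2))                     # one row per sensor
for k in range(200):
    x = A @ x
    for i in range(2):
        y = C @ x + (10.0 if (i == 1 and k > 50) else 0.0)  # injected false data
        pred = A @ xh[i]
        innov = y - C @ pred
        xh[i] = pred + K @ saturate(innov, sat_level)        # step 1: local update
    xh = W @ xh                           # step 2: consensus among neighbors
print("per-sensor errors:", np.linalg.norm(xh - x, axis=1))
```

The attacked sensor's innovation of roughly 10 is clipped to 1, so its correction stays bounded, and the consensus step keeps both estimates close to the healthy sensor's.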
The paper addresses the problem of detecting attacks on distributed estimator networks that aim to intentionally bias the process estimates produced by the network. It provides a sufficient condition, in terms of the feasibility of certain linear matrix inequalities, which guarantees distributed input attack detection using an $H_\infty$ approach.
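As a hedged illustration of what verifying such an LMI feasibility condition looks like in practice, the snippet below checks a standard bounded-real-lemma LMI (certifying $\|G\|_\infty < \gamma$ for $\dot{x} = Ax + Bw$, $z = Cx$) with cvxpy. The paper's actual LMIs encode the detector design and differ; the matrices and $\gamma$ here are assumptions.

```python
# Generic LMI feasibility check via the continuous-time bounded real lemma.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
gamma = 2.0

P = cp.Variable((2, 2), symmetric=True)
# Bounded real lemma block matrix (feedthrough D = 0):
M = cp.bmat([
    [A.T @ P + P @ A, P @ B,              C.T],
    [B.T @ P,         -gamma * np.eye(1), np.zeros((1, 1))],
    [C,               np.zeros((1, 1)),   -gamma * np.eye(1)],
])
prob = cp.Problem(cp.Minimize(0),
                  [P >> 1e-6 * np.eye(2), M << -1e-9 * np.eye(4)])
prob.solve(solver=cp.SCS)
print("LMI feasible (norm bound certified):", prob.status == cp.OPTIMAL)
```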
In this paper, we study the resilient vector consensus problem in networks with adversarial agents and improve the resilience guarantees of existing algorithms. A common approach to achieving resilient vector consensus is for every non-adversarial (or normal) agent in the network to update its state by moving towards a point in the convex hull of its \emph{normal} neighbors' states. Since an agent cannot distinguish between its normal and adversarial neighbors, computing such a point, often called a \emph{safe point}, is a challenging task. To compute a safe point, we propose to use the notion of a \emph{centerpoint}, which is an extension of the median to higher dimensions, instead of the Tverberg partition of points, which is often used for this purpose. We show that the notion of centerpoint provides a complete characterization of safe points in $\mathbb{R}^d$. In particular, we show that a safe point is essentially an interior centerpoint if the number of adversaries in the neighborhood of a normal agent $i$ is less than $\frac{N_i}{d+1}$, where $d$ is the dimension of the state vector and $N_i$ is the total number of agents in the neighborhood of $i$. Consequently, we obtain necessary and sufficient conditions on the number of adversarial agents that guarantee resilient vector consensus. Further, by considering the complexity of computing centerpoints, we discuss improvements in the resilience guarantees of vector consensus algorithms and compare them with other existing approaches. Finally, we numerically evaluate the performance of our approach through experiments.
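A hedged sketch of the update rule in the scalar case $d = 1$, where a centerpoint of the received states is simply their median: each normal agent moves toward the median of the values it hears. The abstract's bound specializes to fewer than $\frac{N_i}{d+1} = \frac{N_i}{2}$ adversarial neighbors, which the median indeed tolerates; the general-$d$ algorithm computes interior centerpoints instead. The network here (complete graph, 8 normal and 3 adversarial agents) is an assumed toy setup.

```python
# Sketch: resilient scalar consensus via the median, the d = 1 centerpoint.
import numpy as np

rng = np.random.default_rng(0)
normal = rng.uniform(0.0, 1.0, size=8)      # 8 normal agents, scalar states

for _ in range(50):
    adversarial = rng.uniform(-10.0, 10.0, size=3)    # 3 attackers, arbitrary values
    received = np.concatenate((normal, adversarial))  # complete graph: all 11 heard
    # 3 adversaries < N_i/(d+1) = 11/2, so the median lies in the convex hull
    # of the normal states, i.e. it is a safe point.
    safe_point = np.median(received)
    normal = normal + 0.5 * (safe_point - normal)     # move toward the safe point

print("spread of normal agents:", normal.max() - normal.min())
```

With at most 3 of 11 received values adversarial, at least two normal values lie on each side of the median, so the safe point is always bracketed by normal states and the normal agents contract to consensus despite the attackers.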