The problem of state estimation in partially observed discrete event systems subject to cyber attacks is considered. An operator observes a plant through a natural projection that hides the occurrence of certain events. The objective of the operator is to estimate the current state of the system. The observation is corrupted by an attacker that can insert and erase some sensor readings with the aim of altering the state estimate of the operator. Furthermore, the attacker wants to remain stealthy, namely the operator should not realize that its observation has been corrupted. An automaton, called the attack structure, is defined to describe the set of all possible attacks. In more detail, an unbounded attack structure is first obtained by the concurrent composition of two state observers, the attacker observer and the operator observer. The attack structure is then refined to obtain a supremal stealthy attack substructure. An attack function may be selected from the supremal stealthy attack substructure; it is said to be harmful when some malicious goal of the attacker is reached, namely if the set of states consistent with the observation produced by the system and the set of states consistent with the corrupted observation belong to a given relation. The proposed approach can dually be used to verify whether there exists a harmful attack for the given system, which allows one to establish whether the system is safe under attack.
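The operator's state estimate in this setting is the classical observer computation: the set of plant states consistent with the (possibly corrupted) observed word under the natural projection. The sketch below illustrates that computation on a toy nondeterministic plant; all names (`delta`, `unobs`, the plant itself) are our illustrative choices, not the paper's notation.

```python
def unobservable_reach(states, delta, unobs):
    """Closure of a state set under unobservable events."""
    stack, closure = list(states), set(states)
    while stack:
        x = stack.pop()
        for (s, e), targets in delta.items():
            if s == x and e in unobs:
                for t in targets:
                    if t not in closure:
                        closure.add(t)
                        stack.append(t)
    return frozenset(closure)

def estimate(init, delta, unobs, observation):
    """Set of plant states consistent with an observed word."""
    est = unobservable_reach({init}, delta, unobs)
    for e in observation:
        step = set()
        for x in est:
            step |= set(delta.get((x, e), ()))
        est = unobservable_reach(step, delta, unobs)
    return est

# Toy plant: events 'a', 'b' observable, 'u' unobservable.
delta = {(0, 'a'): [1], (1, 'u'): [2], (2, 'b'): [0], (1, 'b'): [3]}
print(sorted(estimate(0, delta, {'u'}, ['a'])))  # → [1, 2]
```

An attacker who inserts or erases symbols changes the word fed to `estimate`, and thereby the operator's estimate, without touching the plant itself.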
Opacity, an important property in information-flow security, characterizes the ability of a system to keep some secret information from an intruder. In discrete-event systems, based on a standard setting in which an intruder has complete knowledge of the system's structure, the standa
This paper deals with the state estimation problem in discrete-event systems modeled with nondeterministic finite automata, partially observed via a sensor measuring unit whose measurements (reported observations) may be vitiated by a malicious attacker. The attacks considered in this paper include arbitrary deletions, insertions, or substitutions of observed symbols by taking into account a bounded number of attacks or, more generally, a total cost constraint (assuming that each deletion, insertion, or substitution bears a positive cost to the attacker). An efficient approach is proposed to describe possible sequences of observations that match the one received by the measuring unit, as well as their corresponding state estimates and associated total costs. We develop an algorithm to obtain the least-cost matching sequences by reconstructing only a finite number of possible sequences, which we subsequently use to efficiently perform state estimation. We also develop a technique for verifying tamper-tolerant diagnosability under attacks that involve a bounded number of deletions, insertions, and substitutions (or, more generally, under attacks of bounded total cost) by using a novel structure obtained by attaching attacks and costs to the original plant. The overall construction and verification procedure has complexity O(|X|^2 C^2), where |X| is the number of states of the given finite automaton and C is the maximum total cost that is allowed for all the deletions, insertions, and substitutions. We determine the minimum value of C such that the attacker can coordinate its tampering action to keep the observer indefinitely confused while utilizing a finite number of attacks. Several examples are presented to demonstrate the proposed methods.
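The least-cost matching idea can be pictured as a shortest-path search over pairs (plant state, position in the received observation), where every deletion, insertion, or substitution contributes its cost to the path, much like weighted edit distance. The following is only our sketch of that picture on a deterministic toy automaton, not the paper's algorithm:

```python
import heapq

def least_cost_match(init, delta, obs, c_del=1, c_ins=1, c_sub=1):
    """Minimum total attack cost under which some plant run explains obs.
    delta maps (state, event) -> state (deterministic, for brevity)."""
    pq, seen = [(0, init, 0)], set()
    while pq:
        c, x, i = heapq.heappop(pq)
        if (x, i) in seen:
            continue
        seen.add((x, i))
        if i == len(obs):
            return c  # cheapest explanation of the whole observation
        # insertion: obs[i] was fabricated by the attacker, plant does not move
        heapq.heappush(pq, (c + c_ins, x, i + 1))
        for (s, e), t in delta.items():
            if s != x:
                continue
            # deletion: the plant fired e but its sensor reading was erased
            heapq.heappush(pq, (c + c_del, t, i))
            # exact match (cost 0) or substitution of the reported symbol
            heapq.heappush(pq, (c + (0 if e == obs[i] else c_sub), t, i + 1))
    return None  # no plant run can explain obs at any cost

plant = {(0, 'a'): 1, (1, 'b'): 2}
print(least_cost_match(0, plant, ['a', 'b']))  # → 0 (no tampering needed)
print(least_cost_match(0, plant, ['a', 'c']))  # → 1 (one substitution)
```

Because Dijkstra's search pops states in cost order, the first completed observation gives the least-cost explanation without enumerating all matching sequences.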
This paper focuses on the problem of cyber attacks for discrete event systems under supervisory control. In more detail, the goal of the supervisor, who has a partial observation of the system evolution, is to prevent the system from reaching a set of unsafe states. An attacker may act in two different ways: it can corrupt the observation of the supervisor by editing the sensor readings, and it can enable events that are disabled by the supervisor. This is done with the aim of leading the plant to an unsafe state while keeping the supervisor unaware of that before the unsafe state is reached. A special automaton, called the attack structure, is constructed as the parallel composition of two special structures. Such an automaton can be used by the attacker to select appropriate actions (if any) to reach the above goal, or, equivalently, by the supervisor to validate its robustness with respect to such attacks.
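The parallel composition underlying this attack structure is the usual synchronous product: shared events synchronize both components, private events interleave. The toy version below assumes deterministic components with disjoint private alphabets; all names are our illustrative choices, not the paper's construction.

```python
def parallel(init1, d1, init2, d2, shared):
    """Synchronous product of two automata given as dicts (state, event) -> state."""
    init = (init1, init2)
    delta, frontier, seen = {}, [init], {init}
    while frontier:
        x1, x2 = frontier.pop()
        events = {e for (s, e) in d1 if s == x1} | {e for (s, e) in d2 if s == x2}
        for e in events:
            t1, t2 = d1.get((x1, e)), d2.get((x2, e))
            if e in shared:
                if t1 is None or t2 is None:
                    continue  # a shared event needs both components ready
                nxt = (t1, t2)
            else:
                # a private event moves only the component that owns it
                nxt = (t1, x2) if t1 is not None else (x1, t2)
            delta[((x1, x2), e)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return init, delta

# Toy components synchronizing on 's':
_, d = parallel(0, {(0, 'a'): 1, (1, 's'): 2}, 0, {(0, 's'): 1}, {'s'})
print(d[((1, 0), 's')])  # → (2, 1)
```

In the paper's use, the two composed structures encode, respectively, the attacker's and the supervisor's views, so states of the product track both simultaneously.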
In this paper, an attack-resilient estimation algorithm is presented for linear discrete-time stochastic systems with state and input constraints. It is shown that the state estimation errors of the proposed estimation algorithm are practically exponentially stable.
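The abstract does not give the algorithm, but the general shape of constrained estimation can be illustrated generically: predict with the model, correct with the measurement, then project the estimate onto the constraint set. The scalar sketch below is a textbook-style illustration of that pattern, not the paper's method; every symbol is our own.

```python
def estimator_step(a, b, l, c, xhat, u, y, lo, hi):
    """One step of a scalar Luenberger-style estimator with a box state
    constraint enforced by clipping. Generic illustration only; the paper's
    attack-resilient algorithm and stability proof are not reproduced here."""
    xpred = a * xhat + b * u              # model prediction
    xnew = xpred + l * (y - c * xpred)    # measurement correction
    return min(max(xnew, lo), hi)         # project onto the constraint [lo, hi]

# A wildly corrupted measurement (y = 10) cannot push the estimate
# outside the known state bounds [-1, 1]:
print(estimator_step(1.0, 0.0, 0.5, 1.0, 0.0, 0.0, 10.0, -1.0, 1.0))  # → 1.0
```

Exploiting known constraints in this way is one simple mechanism by which estimation error can be kept bounded even when measurements are attacked.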
Recently we developed supervisor localization, a top-down approach to distributed control of discrete-event systems. Its essence is the allocation of monolithic (global) control action among the local control strategies of individual agents. In this paper, we extend supervisor localization by considering partial observation; namely not all events are observable. Specifically, we employ the recently proposed concept of relative observability to compute a partial-observation monolithic supervisor, and then design a suitable localization procedure to decompose the supervisor into a set of local controllers. In the resulting local controllers, only observable events can cause state change. Further, to deal with large-scale systems, we combine the partial-observation supervisor localization with an efficient architectural synthesis approach: first compute a heterarchical array of partial-observation decentralized supervisors and coordinators, and then localize each of these supervisors/coordinators into local controllers.
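Before any state reduction, the crudest "localization" simply gives each agent a copy of the supervisor that is driven only by observable events and exposes the enablement decision for that agent's own controllable event; the actual localization procedure then shrinks this via a control cover. The class below is our degenerate illustrative sketch of that starting point, not the paper's construction.

```python
class LocalController:
    """Degenerate local controller: replays the supervisor on observable
    events and reports only the enablement of its own controllable event.
    Illustrative sketch; real localization reduces states via a control cover."""

    def __init__(self, sup_delta, sup_init, my_event):
        self.delta = sup_delta    # supervisor transitions: (state, event) -> state
        self.state = sup_init
        self.my_event = my_event  # the one controllable event this agent owns

    def observe(self, event):
        # only observable events reach the controller, so only they change state
        if (self.state, event) in self.delta:
            self.state = self.delta[(self.state, event)]

    def enabled(self):
        return (self.state, self.my_event) in self.delta

sup = {(0, 'a'): 1, (0, 'c'): 0, (1, 'b'): 0}
lc = LocalController(sup, 0, 'c')
print(lc.enabled())  # → True ('c' is enabled at supervisor state 0)
lc.observe('a')
print(lc.enabled())  # → False ('c' is disabled at supervisor state 1)
```

The point of localization is that each agent's controller usually needs far fewer states than this full supervisor copy, because most state distinctions are irrelevant to any single control decision.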