This paper focuses on the problem of cyber attacks on discrete event systems under supervisory control. In more detail, the goal of the supervisor, who has a partial observation of the system evolution, is to prevent the system from reaching a set of unsafe states. An attacker may act in two different ways: he can corrupt the observation of the supervisor by editing the sensor readings, and he can enable events that are disabled by the supervisor. This is done with the aim of leading the plant to an unsafe state while keeping the supervisor unaware of the attack until the unsafe state is reached. A special automaton, called the attack structure, is constructed as the parallel composition of two special structures. Such an automaton can be used by the attacker to select appropriate actions (if any) to reach the above goal, or, equivalently, by the supervisor to validate its robustness with respect to such attacks.
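As a rough illustration of the composition step mentioned above, the following Python sketch builds the parallel (synchronous) composition of two finite automata: shared events synchronize both components, private events move only their owner. The tuple encoding and the toy components G1 and G2 are assumptions for illustration; the paper's attack structure composes two purpose-built structures rather than arbitrary automata.

```python
# Minimal sketch: parallel composition of two automata, each encoded
# as (alphabet, transition dict, initial state). Illustrative only.
def parallel_composition(A, B):
    """Shared events synchronize both automata; private events move
    only their owner. Each delta maps (state, event) -> state."""
    alpha_a, delta_a, init_a = A
    alpha_b, delta_b, init_b = B
    shared = alpha_a & alpha_b
    init = (init_a, init_b)
    delta, seen, stack = {}, {init}, [init]
    while stack:
        qa, qb = stack.pop()
        for e in alpha_a | alpha_b:
            if e in shared:
                if (qa, e) not in delta_a or (qb, e) not in delta_b:
                    continue  # a shared event needs both to agree
                nxt = (delta_a[(qa, e)], delta_b[(qb, e)])
            elif e in alpha_a:
                if (qa, e) not in delta_a:
                    continue
                nxt = (delta_a[(qa, e)], qb)
            else:
                if (qb, e) not in delta_b:
                    continue
                nxt = (qa, delta_b[(qb, e)])
            delta[((qa, qb), e)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return delta, init

# Two toy components sharing event "s":
G1 = ({"s", "a"}, {(0, "a"): 1, (1, "s"): 0}, 0)
G2 = ({"s", "b"}, {(0, "s"): 1, (1, "b"): 0}, 0)
trans, q0 = parallel_composition(G1, G2)
print(len(trans), q0)  # 5 (0, 0)
```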
The problem of state estimation in the setting of partially observed discrete event systems subject to cyber attacks is considered. An operator observes a plant through a natural projection that hides the occurrence of certain events. The objective of the operator is to estimate the current state of the system. The observation is corrupted by an attacker who can insert and erase sensor readings with the aim of altering the operator's state estimate. Furthermore, the attacker wants to remain stealthy, namely the operator should not realize that its observation has been corrupted. An automaton, called the attack structure, is defined to describe the set of all possible attacks. In more detail, an unbounded attack structure is first obtained by the concurrent composition of two state observers, the attacker observer and the operator observer. The attack structure is then refined to obtain a supremal stealthy attack substructure. An attack function may be selected from the supremal stealthy attack substructure; it is said to be harmful when some malicious goal of the attacker is reached, namely when the set of states consistent with the observation produced by the system and the set of states consistent with the corrupted observation belong to a given relation. The proposed approach can be dually used to verify whether there exists a harmful attack for the given system: this allows one to establish whether the system is safe under attack.
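The two observers mentioned above are, at their core, subset-construction state estimators under a natural projection. The following hedged Python sketch shows such an observer; the automaton encoding and the example events are assumptions for illustration, not the paper's attacker and operator observers themselves.

```python
# Minimal sketch of a state observer under a natural projection that
# hides unobservable events. Illustrative encoding, not the paper's.
def unobservable_reach(states, delta, unobs):
    """Extend a set of states with everything reachable through
    unobservable events only."""
    reach, stack = set(states), list(states)
    while stack:
        q = stack.pop()
        for e in unobs:
            nxt = delta.get((q, e))
            if nxt is not None and nxt not in reach:
                reach.add(nxt)
                stack.append(nxt)
    return frozenset(reach)

def observer(delta, init, obs, unobs):
    """Each observer state is the set of plant states consistent with
    the observation produced so far (subset construction)."""
    init_est = unobservable_reach({init}, delta, unobs)
    trans, seen, stack = {}, {init_est}, [init_est]
    while stack:
        est = stack.pop()
        for e in obs:
            step = {delta[(q, e)] for q in est if (q, e) in delta}
            if not step:
                continue
            nxt = unobservable_reach(step, delta, unobs)
            trans[(est, e)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return trans, init_est

# Example: "u" is unobservable; after observing "o" the estimate is
# the set of states consistent with that observation.
delta = {(0, "u"): 1, (0, "o"): 2, (1, "o"): 3}
trans, init_est = observer(delta, 0, obs={"o"}, unobs={"u"})
print(init_est, trans[(init_est, "o")])  # {0, 1} -> {2, 3}
```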
The resilience of cyberphysical systems to denial-of-service (DoS) and integrity attacks is studied in this paper. The cyberphysical system is modeled as a linear structured system, and its resilience to an attack is interpreted in a graph-theoretic framework. The structural resilience of the system is characterized in terms of unmatched vertices in maximum matchings of a bipartite graph representation, and connected components of a directed graph representation, of the system under attack. We first present conditions for the system to be resilient to DoS attacks, in which an adversary may block access to or turn off certain inputs to the system. We extend this analysis to characterize resilience of the system when an adversary might additionally have the ability to affect the implementation of state-feedback control strategies; this is termed an integrity attack. We establish conditions under which a system that is structurally resilient to a DoS attack will also be resilient to a certain class of integrity attacks. Finally, we formulate an extension to the case of switched linear systems and derive conditions for such systems to be structurally resilient to a DoS attack.
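For readers unfamiliar with the matching condition, the hedged sketch below computes the unmatched state vertices in a maximum matching of a structured system's bipartite graph using networkx. The pattern matrix A and the split of each state into an "out" and an "in" copy follow the usual structural-controllability construction and are illustrative, not taken from the paper.

```python
# Hedged sketch: unmatched vertices in a maximum matching of the
# bipartite graph of a structured system. Example matrix is a toy.
import networkx as nx

def unmatched_states(adjacency):
    """adjacency[i][j] != 0 encodes a structural dependence of state i
    on state j; match out-copies of states to in-copies."""
    n = len(adjacency)
    G = nx.Graph()
    out_nodes = [("out", j) for j in range(n)]
    G.add_nodes_from(out_nodes, bipartite=0)
    G.add_nodes_from([("in", i) for i in range(n)], bipartite=1)
    for i in range(n):
        for j in range(n):
            if adjacency[i][j]:
                G.add_edge(("out", j), ("in", i))
    matching = nx.bipartite.maximum_matching(G, top_nodes=out_nodes)
    return [i for i in range(n) if ("in", i) not in matching]

# Example: a 3-state chain x1 -> x2 -> x3. Only x1's in-copy is
# unmatched, so a single input acting on x1 suffices for matching.
A = [[0, 0, 0],
     [1, 0, 0],
     [0, 1, 0]]
print(unmatched_states(A))  # [0]
```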
We address the problem of attack detection and isolation for a class of discrete-time nonlinear systems under (potentially unbounded) sensor attacks and measurement noise. We consider the case when a subset of sensors is subject to additive false data injection attacks. Using a bank of observers, each leading to an Input-to-State Stable (ISS) estimation error, we propose two algorithms for detecting and isolating sensor attacks. These algorithms use the ISS property of the observers to check whether the trajectories of the observers are consistent with the attack-free trajectories of the system. Simulation results are presented to illustrate the performance of the proposed algorithms.
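A minimal sketch of the consistency test behind such an observer-bank scheme is given below, with a static threshold standing in for the ISS error bound. The linear plant, observer gains, attack signal, and threshold are all toy values chosen for illustration; the paper treats a nonlinear setting.

```python
# Hedged sketch: a bank of two Luenberger observers, one per sensor;
# an observer whose residual violates the (simplified) bound is
# inconsistent with attack-free behavior, so its sensor is isolated.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = [np.array([[1.0, 0.0]]),    # sensor 1 (attack-free)
     np.array([[0.0, 1.0]])]    # sensor 2 (attacked from k = 25)
L = [np.array([[0.5], [0.0]]),  # toy gains chosen so that each
     np.array([[0.0], [0.5]])]  # A - L_i C_i is Schur stable
THRESH = 0.3                    # stands in for the ISS residual bound

x = np.array([1.0, -1.0])
x_hat = [np.zeros(2), np.zeros(2)]
residual = [0.0, 0.0]
for k in range(50):
    for i in range(2):
        y = C[i] @ x + 0.01 * rng.standard_normal(1)
        if i == 1 and k >= 25:
            y += 2.0            # additive false data injection
        r = y - C[i] @ x_hat[i]
        residual[i] = abs(r[0])
        x_hat[i] = A @ x_hat[i] + L[i] @ r
    x = A @ x

print("flagged sensors:", [i for i in range(2) if residual[i] > THRESH])
```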
Applying security as a lifecycle practice is becoming increasingly important to combat targeted attacks in safety-critical systems. Among others, there are two significant challenges in this area: (1) the need for models that can characterize a realistic system in the absence of an implementation, and (2) an automated way to associate attack vector information, that is, historical data, with such system models. We propose the cybersecurity body of knowledge (CYBOK), which takes in sufficiently characteristic models of systems and acts as a search engine for potential attack vectors. CYBOK is fundamentally an algorithmic approach to vulnerability exploration, which is a significant extension to the body of knowledge it builds upon. By using CYBOK, security analysts and system designers can work together to assess the overall security posture of systems early in their lifecycle, during major design decisions and before final product designs. This assists in applying security earlier and throughout the system lifecycle.
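The abstract does not spell out the matching algorithm; purely to illustrate the "search engine" idea, the sketch below ranks hypothetical attack-vector records by keyword overlap with a system-model component. The corpus entries and scoring are invented for this example; CYBOK's actual matching over attack-vector data is more involved.

```python
# Purely illustrative: match system-model components against a tiny,
# hypothetical attack-vector corpus by shared keywords.
ATTACK_VECTORS = [
    {"id": "AV-1", "keywords": {"can", "bus", "spoofing"}},
    {"id": "AV-2", "keywords": {"gps", "spoofing", "sensor"}},
    {"id": "AV-3", "keywords": {"firmware", "update", "integrity"}},
]

def search(component_keywords):
    """Rank attack-vector records by keyword overlap with a component,
    dropping records with no overlap at all."""
    scored = [(len(av["keywords"] & component_keywords), av["id"])
              for av in ATTACK_VECTORS]
    return [vid for score, vid in sorted(scored, reverse=True) if score]

# A model element described by its keywords:
print(search({"gps", "receiver", "sensor"}))  # ['AV-2']
```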
In this paper, we propose a new automaton property of N-step nonblockingness for a given positive integer N. This property quantifies the standard nonblocking property by capturing the practical requirement that all tasks be completed within a bounded number of steps. Accordingly, we formulate a new N-step nonblocking supervisory control problem, and characterize its solvability in terms of a new concept of N-step language completability. It is proved that there exists a unique supremal N-step completable sublanguage of a given language, and we develop a generator-based algorithm to compute the supremal sublanguage. Finally, together with the supremal controllable sublanguage, we design an algorithm to compute a maximally permissive supervisory control solution to the new N-step nonblocking supervisory control problem.
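The property itself can be stated operationally: from every reachable state, a marked state must be reachable within N transitions. The hedged Python sketch below checks this N-step nonblocking property on a toy automaton; it is a verification sketch only, not the paper's supremal-sublanguage or supervisor-synthesis algorithm.

```python
# Hedged sketch: check N-step nonblockingness by a depth-bounded BFS
# from every reachable state. Toy automaton, illustrative encoding.
from collections import deque

def n_step_nonblocking(delta, init, marked, n):
    """delta maps (state, event) -> state. Returns True iff every
    state reachable from init can reach a marked state in <= n steps."""
    succ = {}
    for (q, _e), q2 in delta.items():
        succ.setdefault(q, set()).add(q2)
    # collect the states reachable from the initial state
    reachable, stack = {init}, [init]
    while stack:
        q = stack.pop()
        for q2 in succ.get(q, ()):
            if q2 not in reachable:
                reachable.add(q2)
                stack.append(q2)
    # bounded BFS from each reachable state
    for q in reachable:
        frontier, seen, ok = deque([(q, 0)]), {q}, q in marked
        while frontier and not ok:
            s, d = frontier.popleft()
            if d == n:
                continue  # depth budget exhausted on this branch
            for s2 in succ.get(s, ()):
                if s2 in marked:
                    ok = True
                    break
                if s2 not in seen:
                    seen.add(s2)
                    frontier.append((s2, d + 1))
        if not ok:
            return False
    return True

# Example: states 0 -> 1 -> 2 (marked), with a self-loop at 0.
delta = {(0, "a"): 1, (1, "b"): 2, (0, "c"): 0}
print(n_step_nonblocking(delta, 0, {2}, 2))  # True
print(n_step_nonblocking(delta, 0, {2}, 1))  # False: 0 needs 2 steps
```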