In this paper, we consider the problem of synthesizing maximally permissive covert damage-reachable attackers in the setup where the model of the supervisor is unknown to the adversary, but the adversary has recorded a (prefix-closed) finite set of observations of the runs of the closed-loop system. The synthesized attacker needs to ensure both damage-reachability and covertness against all supervisors that are consistent with the given set of observations. From the adversary's point of view, there is a gap between the de facto maximal permissiveness, which assumes the model of the supervisor is known, and the maximal permissiveness attainable with only limited knowledge of the supervisor's model. We consider the setup where the attacker can exercise sensor replacement/deletion attacks and actuator enablement/disablement attacks. The solution methodology proposed in this work reduces the synthesis of maximally permissive covert damage-reachable attackers, given the model of the plant and the finite set of observations, to the synthesis of maximally permissive safe supervisors for a certain transformed plant, which establishes the decidability of the observation-assisted covert attacker synthesis problem. The effectiveness of our approach is illustrated on a water tank example adapted from the literature.
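To make the consistency requirement concrete, the following minimal Python sketch checks whether a candidate supervisor, modelled as a partial DFA over observable events, could have produced a recorded, prefix-closed set of observations. The event names and the toy DFA are hypothetical and are not taken from the paper.

```python
# Minimal sketch: check whether a candidate supervisor (a partial DFA over
# observable events) is consistent with a prefix-closed set of observations.
# Event names and the example DFA are hypothetical.

observations = {"", "a", "ab", "abc"}  # prefix-closed set of recorded runs

supervisor = {  # partial DFA transitions: (state, event) -> state
    (0, "a"): 1, (1, "b"): 2, (2, "c"): 0,
}

def consistent(dfa, obs):
    for word in obs:
        state = 0
        for e in word:
            if (state, e) not in dfa:
                return False  # this supervisor could not have produced the run
            state = dfa[(state, e)]
    return True

print(consistent(supervisor, observations))  # True
```

An attacker that is covert against every supervisor passing this check is covert in the observation-assisted sense described above.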
In this paper, we introduce a robust sensor design framework to provide persuasion-based defense in stochastic control systems against an attacker of unknown type, where each type has its own exclusive control objective. For effective control, such an attacker's actions depend on its belief about the underlying state of the system. We design a robust linear-plus-noise signaling strategy that encodes sensor outputs so as to shape the attacker's belief strategically and thereby persuade the attacker to take actions that cause minimum damage with respect to the system's objective. The specific model we adopt is a Gauss-Markov process driven by a controller with a (partially) unknown malicious/benign control objective. We seek to defend against the worst possible distribution over control objectives in a robust way under the solution concept of Stackelberg equilibrium, where the sensor is the leader. We show that a necessary and sufficient condition on the covariance matrix of the posterior belief is a certain linear matrix inequality, and we provide a closed-form solution for the associated signaling strategy. This enables us to formulate an equivalent tractable problem, indeed a semi-definite program, to compute the robust sensor design strategies globally, even though the original optimization problem is non-convex and highly nonlinear. We also extend this result to scenarios where the sensor makes noisy or partial measurements. Finally, we analyze the ensuing performance numerically for various scenarios.
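As a rough illustration of the reduction to a semi-definite program, the sketch below solves a toy SDP whose decision variable is the posterior-belief covariance S, constrained by an LMI of the plausible form 0 ⪯ S ⪯ Σ. The matrices Sigma and Q are hypothetical placeholders; the paper's actual objective couples the sensor's and attacker's costs, so this only shows the structure of the computation.

```python
import cvxpy as cp
import numpy as np

n = 2
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])  # prior state covariance (hypothetical)
Q = np.array([[1.0, 0.0], [0.0, 0.5]])      # quadratic cost weight (hypothetical)

S = cp.Variable((n, n), symmetric=True)     # posterior covariance of the belief
constraints = [S >> 0, Sigma - S >> 0]      # LMI: 0 <= S <= Sigma
prob = cp.Problem(cp.Minimize(cp.trace(Q @ S)), constraints)
prob.solve(solver=cp.SCS)

print(S.value)  # covariance to induce via the linear-plus-noise signal
```

Once an optimal S is found, the paper's closed-form result recovers the corresponding signaling strategy from it.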
We study finite-state controllers (FSCs) for partially observable Markov decision processes (POMDPs) that are provably correct with respect to given specifications. The key insight is that computing (randomised) FSCs on POMDPs is equivalent to, and computationally as hard as, synthesis for parametric Markov chains (pMCs). This correspondence allows us to use tools for parameter synthesis in pMCs to compute correct-by-construction FSCs on POMDPs for a variety of specifications. Our experimental evaluation shows performance comparable to well-known POMDP solvers.
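The correspondence can be made concrete on a toy example: fixing a memoryless randomised FSC with one parameter per observation turns the POMDP into a parametric Markov chain whose transition probabilities are polynomials in those parameters. The two-state POMDP below is hypothetical, built with sympy only to exhibit the induced pMC.

```python
import sympy as sp

# Tiny hypothetical POMDP: two states, two actions, a state-to-observation map.
states = [0, 1]
obs = {0: "o0", 1: "o1"}
T = {  # T[s][a][s'] transition probabilities
    0: {"a": [0.8, 0.2], "b": [0.3, 0.7]},
    1: {"a": [0.5, 0.5], "b": [0.1, 0.9]},
}

# Memoryless randomised FSC: one parameter per observation, p_o = Pr(action 'a' | o).
params = {o: sp.Symbol(f"p_{o}") for o in set(obs.values())}

# Induced parametric Markov chain:
# P(s' | s) = p_o * T[s]['a'][s'] + (1 - p_o) * T[s]['b'][s'], with o = obs(s).
P = sp.Matrix(len(states), len(states), lambda s, t:
              params[obs[s]] * T[s]["a"][t] + (1 - params[obs[s]]) * T[s]["b"][t])
print(P)  # rows sum to 1 for any parameter valuation in [0, 1]
```

A pMC parameter-synthesis tool can then search for valuations of p_o0 and p_o1 under which the chain satisfies the specification, yielding the FSC directly.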
Timed automata are a convenient formalism for modelling and reasoning about real-time systems. While they provide a powerful way of representing the timing aspects of such systems, timed automata assume arbitrary precision and zero-delay actions; in particular, a state might be declared reachable in a timed automaton yet be impossible to reach in the physical system it models. In this paper, we consider permissive strategies as a way to overcome this problem: such strategies propose intervals of delays instead of single delays, and aim at reaching a target state whichever delay actually takes place. We develop an algorithm for computing the optimal permissiveness (and an associated maximally-permissive strategy) in acyclic timed automata and games.
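As a worked toy case, consider a linear, acyclic chain of edges in which every edge resets the clock; the interval choices then decouple, the maximally permissive strategy proposes each full guard interval, and the permissiveness of the strategy (taken here as the smallest proposed interval width, one common measure) is straightforward to compute. The guard intervals below are hypothetical; the paper's algorithm handles the general case where clocks interact across edges.

```python
# Minimal sketch: optimal permissiveness along a linear, acyclic chain of edges,
# assuming each edge resets the clock so the interval choices decouple.
guards = [(0.0, 3.0), (1.0, 2.5), (0.5, 4.0)]  # hypothetical guard intervals [a_i, b_i]

# With resets, the maximally permissive strategy proposes the full guard interval
# on each edge; the permissiveness of the run is the smallest proposed width.
permissiveness = min(b - a for a, b in guards)
print(permissiveness)  # 1.5
```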
The deployment of autonomous systems that operate in unstructured environments necessitates algorithms to verify their safety. This can be challenging due to, e.g., black-box components in the control software, or undermodelled dynamics that prevent model-based verification. We present a novel framework for verifying an unknown dynamical system from a given set of noisy observations of its dynamics. Using Gaussian processes trained on this data set, the framework abstracts the system as an uncertain Markov process with discrete states defined over the safe set. The transition bounds of the abstraction are derived from the probabilistic error bounds between the regression and the underlying system. An existing approach for verifying safety properties over uncertain Markov processes then generates safety guarantees. We demonstrate the versatility of the framework on several examples, including switched and nonlinear systems.
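The sketch below illustrates the abstraction step on a hypothetical one-dimensional system: a Gaussian process is fitted to noisy observations of the dynamics, the safe set is gridded into discrete states, and each transition probability is widened into an interval using a uniform error bound eps that stands in for the paper's probabilistic regression error bounds.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Noisy observations (x, f(x)) of unknown 1-D dynamics x' = f(x) (hypothetical data).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (30, 1))
y = 0.9 * X.ravel() + 0.05 * np.sin(8 * X.ravel()) + 0.01 * rng.standard_normal(30)

gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-4).fit(X, y)

# Discrete abstract states: a uniform grid over the safe set [0, 1].
edges = np.linspace(0, 1, 6)
centers = (edges[:-1] + edges[1:]) / 2
mu, sd = gp.predict(centers.reshape(-1, 1), return_std=True)

# Transition intervals: Gaussian mass each cell receives from the GP posterior,
# widened by a (hypothetical) uniform error bound eps on the regression.
eps = 0.05
for i, (m, s) in enumerate(zip(mu, sd)):
    p = norm.cdf(edges[1:], m, s) - norm.cdf(edges[:-1], m, s)
    lower, upper = np.clip(p - eps, 0, 1), np.clip(p + eps, 0, 1)
    print(f"state {i}: P in [{np.round(lower, 2)}, {np.round(upper, 2)}]")
```

The resulting interval-valued transition matrix is exactly the kind of uncertain Markov process that existing verification tools can check against safety properties.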
We present a novel class of nonlinear controllers that interpolates among differently behaving linear controllers, as a case study for the recently proposed Linear and Nonlinear System Level Synthesis framework. The structure of the nonlinear controller allows it to simultaneously satisfy performance and safety objectives defined for small- and large-disturbance regimes. The proposed controller is distributed, handles delays and sparse actuation, and localizes disturbances. We show that our nonlinear controller always outperforms its linear counterpart for constrained LQR problems. We further demonstrate, via simulation, the anti-windup property of an augmented control strategy based on the proposed controller for saturated systems.
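The blending idea can be sketched outside the System Level Synthesis machinery: interpolate between an aggressive gain tuned for the small-disturbance regime and a conservative gain for the large-disturbance regime, with a state-dependent weight. The gains and the weighting function below are hypothetical and only illustrate the interpolation structure, not the paper's SLS construction.

```python
import numpy as np

# Two linear gains tuned for small- and large-disturbance regimes (hypothetical).
K_small = np.array([[-1.0, -0.5]])  # aggressive, good nominal LQR performance
K_large = np.array([[-0.4, -0.2]])  # conservative, respects input constraints

def blend(x, threshold=1.0):
    """Interpolation weight: close to 1 near the origin, decaying for large states."""
    return 1.0 / (1.0 + max(0.0, np.linalg.norm(x) - threshold))

def u(x):
    lam = blend(x)
    return lam * (K_small @ x) + (1.0 - lam) * (K_large @ x)

print(u(np.array([0.1, 0.0])))  # small-disturbance regime: aggressive gain dominates
print(u(np.array([5.0, 2.0])))  # large-disturbance regime: conservative gain dominates
```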