Persuasion-based Robust Sensor Design Against Attackers with Unknown Control Objectives


Abstract

In this paper, we introduce a robust sensor design framework that provides persuasion-based defense in stochastic control systems against an attacker of unknown type, where each type has its own control objective. For effective control, such an attacker's actions depend on its belief about the underlying state of the system. We design a robust linear-plus-noise signaling strategy that encodes the sensor outputs so as to shape the attacker's belief strategically and thereby persuade the attacker to take actions that cause minimum damage with respect to the system's objective. The specific model we adopt is a Gauss-Markov process driven by a controller with a (partially) unknown malicious or benign control objective. We seek to defend against the worst possible distribution over control objectives in a robust way under the solution concept of Stackelberg equilibrium, where the sensor is the leader. We show that a necessary and sufficient condition on the covariance matrix of the posterior belief is a certain linear matrix inequality, and we provide a closed-form solution for the associated signaling strategy. This enables us to formulate an equivalent tractable problem, in fact a semi-definite program, that computes the robust sensor design strategies globally even though the original optimization problem is non-convex and highly nonlinear. We also extend this result to scenarios where the sensor makes noisy or partial measurements. Finally, we analyze the ensuing performance numerically for various scenarios.
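As a small illustration of the mechanism the abstract describes, the sketch below shows how a linear-plus-noise signaling rule s = L x + n shapes the receiver's posterior covariance over a Gaussian state via standard Gaussian conditioning. All numbers, and the names `L`, `Sigma_x`, and `Sigma_n`, are hypothetical and not taken from the paper; this is not the paper's optimized design, only the conditioning step that any such design controls.

```python
import numpy as np

# Hypothetical linear-plus-noise signaling rule: s = L x + n, with
# state x ~ N(0, Sigma_x) and signaling noise n ~ N(0, Sigma_n).
# The sensor's choice of (L, Sigma_n) determines the attacker's
# posterior covariance over x, computed by Gaussian conditioning.

Sigma_x = np.array([[2.0, 0.5],
                    [0.5, 1.0]])   # prior covariance of the state (example values)
L = np.array([[1.0, 0.0]])         # signaling gain: reveals the first coordinate
Sigma_n = np.array([[0.5]])        # covariance of the added signaling noise

# Covariance of the signal s, and posterior covariance of x given s:
S = L @ Sigma_x @ L.T + Sigma_n
Sigma_post = Sigma_x - Sigma_x @ L.T @ np.linalg.solve(S, L @ Sigma_x)

print(Sigma_post)  # posterior uncertainty left to the attacker
```

Varying `L` and `Sigma_n` trades off how much the attacker learns about each state coordinate; the paper's contribution is characterizing the achievable posterior covariances (via a linear matrix inequality) and selecting the one that minimizes worst-case damage.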
