Attack Allocation on Remote State Estimation in Multi-Systems: Structural Results and Asymptotic Solution


Abstract

This paper considers optimal allocation of attack attention on remote state estimation in multi-systems. Suppose there are $\mathtt{M}$ independent systems, each monitored by a remote sensor that sends its local state estimates to a fusion center over a packet-dropping channel. An attacker may inject noise to degrade the communication channels between the sensors and the fusion center. Due to capacity limitations, at each time the attacker can attack at most $\mathtt{N}$ of the $\mathtt{M}$ channels. The attacker's goal is to find an optimal policy that maximizes the estimation error at the fusion center. The problem is formulated as a Markov decision process (MDP), and the existence of an optimal deterministic and stationary policy is proved. We further show that the optimal policy has a threshold structure, which significantly reduces the computational complexity. Based on the threshold structure, a myopic policy is proposed for homogeneous models and its optimality is established. To overcome the curse of dimensionality of MDP algorithms for general heterogeneous models, we further provide an asymptotically (as $\mathtt{M}$ and $\mathtt{N}$ go to infinity) optimal solution that is easy to compute and implement. Numerical examples are given to illustrate the main results.
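As a rough illustration of the myopic allocation idea described above (not the paper's exact algorithm), the following sketch attacks, at each time step, the $\mathtt{N}$ channels whose degradation yields the largest one-step expected increase in estimation-error cost. The holding-time cost functions `h`, the reception rates `lam`/`lam_a`, and all variable names are hypothetical modeling assumptions introduced here for illustration only.

```python
import numpy as np

# Minimal sketch of a myopic attack-allocation rule for M independent channels
# under a budget of N simultaneous attacks. Assumptions (illustrative only):
#   tau[i]   : time since the fusion center last received sensor i's packet
#   h[i](t)  : expected error cost for system i at holding time t (non-decreasing)
#   lam[i]   : packet reception rate of channel i without attack
#   lam_a[i] : degraded packet reception rate of channel i under attack
def myopic_allocation(tau, h, lam, lam_a, N):
    M = len(tau)
    gains = np.empty(M)
    for i in range(M):
        # expected next-step cost if channel i is left alone
        cost_free = lam[i] * h[i](1) + (1.0 - lam[i]) * h[i](tau[i] + 1)
        # expected next-step cost if channel i is attacked
        cost_att = lam_a[i] * h[i](1) + (1.0 - lam_a[i]) * h[i](tau[i] + 1)
        gains[i] = cost_att - cost_free
    # attack the N channels with the largest myopic gain
    return set(np.argsort(gains)[-N:])

if __name__ == "__main__":
    # toy usage with a linear holding-time cost h_i(t) = t (purely illustrative)
    M, N = 5, 2
    tau = np.array([3, 1, 7, 2, 5])
    h = [lambda t: float(t)] * M
    lam = np.full(M, 0.9)
    lam_a = np.full(M, 0.4)
    print(myopic_allocation(tau, h, lam, lam_a, N))
```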
