
Quickest Detection of Deception Attacks in Networked Control Systems with Physical Watermarking

Added by Subhrakanti Dey
Publication date: 2021
Language: English





In this paper, we propose and analyze an attack detection scheme for securing the physical layer of a networked control system against attacks where the adversary replaces the true observations with stationary false data. An independent and identically distributed watermarking signal is added to the optimal linear quadratic Gaussian (LQG) control inputs, and a cumulative sum (CUSUM) test is carried out using the joint distribution of the innovation signal and the watermarking signal for quickest attack detection. We derive the expressions of the supremum of the average detection delay (SADD) for a multi-input and multi-output (MIMO) system under the optimal and sub-optimal CUSUM tests. The SADD is asymptotically inversely proportional to the expected Kullback-Leibler divergence (KLD) under certain conditions. The expressions for the MIMO case are simplified for multi-input and single-output systems and explored further to distil design insights. We provide insights into the design of an optimal watermarking signal to maximize KLD for a given fixed increase in LQG control cost when there is no attack. Furthermore, we investigate how the attacker and the control system designer can accomplish their respective objectives by changing the relative power of the attack signal and the watermarking signal. Simulations and numerical studies are carried out to validate the theoretical results.
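The following Python/NumPy sketch is a deliberately simplified scalar illustration of the pipeline described above: an i.i.d. watermarking signal is added to the control input, the Kalman innovation is computed, and a CUSUM recursion raises an alarm when false data replaces the true measurements. All plant, noise, attack, and threshold parameters are assumptions made for illustration, and the log-likelihood ratio below uses only the marginal innovation density rather than the joint distribution of the innovation and the watermarking signal derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed scalar plant: x[k+1] = a x[k] + b u[k] + w[k],   y[k] = c x[k] + v[k]
a, b, c = 0.9, 1.0, 1.0
q, r = 0.1, 0.1                 # process / measurement noise variances (assumed)
L = 0.5                         # LQG-style feedback gain (assumed, stabilizing)
sigma_e2 = 0.2                  # watermark variance (design knob)

# Steady-state Kalman gain via the scalar Riccati recursion.
p = q
for _ in range(500):
    p_pred = a * a * p + q
    k_gain = p_pred * c / (c * c * p_pred + r)
    p = (1.0 - k_gain * c) * p_pred
sig0 = c * c * (a * a * p + q) + r   # nominal innovation variance (no attack)
sig1 = 2.0 * sig0                    # assumed post-attack innovation variance

def llr(gamma):
    """Log-likelihood ratio of N(0, sig1) versus N(0, sig0) at the observed innovation."""
    return 0.5 * (np.log(sig0 / sig1) + gamma**2 * (1.0 / sig0 - 1.0 / sig1))

T, attack_start, threshold = 400, 200, 10.0
x = xhat = g = 0.0
alarm_time = None
for k in range(T):
    e = rng.normal(0.0, np.sqrt(sigma_e2))            # i.i.d. watermarking signal
    u = -L * xhat + e                                 # watermarked control input
    x = a * x + b * u + rng.normal(0.0, np.sqrt(q))   # true plant
    y = c * x + rng.normal(0.0, np.sqrt(r))
    if k >= attack_start:                             # deception attack: replace y
        y = rng.normal(0.0, np.sqrt(sig1))            # with stationary false data
    xhat_pred = a * xhat + b * u                      # Kalman time update
    gamma = y - c * xhat_pred                         # innovation / residue
    xhat = xhat_pred + k_gain * gamma                 # Kalman measurement update
    g = max(0.0, g + llr(gamma))                      # CUSUM recursion
    if alarm_time is None and g > threshold:
        alarm_time = k

print(f"attack starts at k={attack_start}, CUSUM alarm at k={alarm_time}")
```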



Related research

In this paper, we investigate the role of a physical watermarking signal in quickest detection of a deception attack in a scalar linear control system where the sensor measurements can be replaced by an arbitrary stationary signal generated by an attacker. By adding a random watermarking signal to the control action, the controller designs a sequential test based on a Cumulative Sum (CUSUM) method that accumulates the log-likelihood ratio between the joint distribution of the residue and the watermarking signal (under attack) and the joint distribution of the innovations and the watermarking signal (under no attack). As the average detection delay in such tests is asymptotically (as the false alarm rate goes to zero) upper bounded by a quantity inversely proportional to the Kullback-Leibler divergence (KLD) between the two joint distributions mentioned above, we analyze the effect of the watermarking signal variance on this KLD. We also analyze the increase in the LQG control cost due to the watermarking signal, and show that there is a tradeoff between quick detection of attacks and the penalty in the control cost. It is shown that by considering a sequential detection test based on the joint distributions of the residue/innovations and the watermarking signal, as opposed to the distributions of the residue/innovations only, we can achieve a higher KLD, thus resulting in a reduced average detection delay. Numerical results are provided to support our claims.
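As a rough illustration of the delay/cost tradeoff described above, the snippet below evaluates, for a scalar loop, how the KLD between the no-attack and under-attack residue distributions (and hence the asymptotic detection-delay bound, roughly |log of the false alarm rate| / KLD) and the extra LQG cost change with the watermark variance. The residue-variance model and the cost constant are assumptions made purely for exposition, not quantities taken from the paper.

```python
import numpy as np

sigma_gamma2 = 0.3            # nominal innovation variance (assumed)
b, gain_factor = 1.0, 1.0     # assumed loop constants mapping watermark power into the residue
lqg_cost_per_unit = 0.8       # assumed extra LQG cost per unit of watermark variance

def kld_gauss0(s1, s0):
    """KL divergence D( N(0, s1) || N(0, s0) ) between zero-mean Gaussians."""
    return 0.5 * (s1 / s0 - 1.0 - np.log(s1 / s0))

false_alarm_rate = 1e-4
for sigma_e2 in (0.0, 0.1, 0.2, 0.5, 1.0):
    # Assumed model: under attack the residue picks up the unmatched watermark term,
    # so its variance grows with the watermark power; the no-attack variance does not.
    s_attack = sigma_gamma2 + gain_factor * b**2 * sigma_e2
    kld = kld_gauss0(s_attack, sigma_gamma2)
    delay_bound = np.inf if kld == 0 else np.log(1.0 / false_alarm_rate) / kld
    extra_cost = lqg_cost_per_unit * sigma_e2
    print(f"sigma_e2={sigma_e2:.2f}  KLD={kld:.4f}  "
          f"asymptotic delay bound~{delay_bound:.1f}  extra LQG cost~{extra_cost:.2f}")
```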
Networked robotic systems, such as connected vehicle platoons, can improve the safety and efficiency of transportation networks by allowing for high-speed coordination. To enable such coordination, these systems rely on networked communications. This can make them susceptible to cyber attacks. Though security methods such as encryption or specially designed network topologies can increase the difficulty of successfully executing such an attack, these techniques are unable to guarantee secure communication against an attacker. More troublingly, these security methods are unable to ensure that individual agents are able to detect attacks that alter the content of specific messages. To ensure resilient behavior under such attacks, this paper formulates a networked linear time-varying version of dynamic watermarking in which each agent generates and adds a private excitation to the input of its corresponding robotic subsystem. This paper demonstrates that such a method can enable each agent in a networked robotic system to detect cyber attacks. By altering measurements sent between vehicles, this paper illustrates that an attacker can create unstable behavior within a platoon. By utilizing the dynamic watermarking method proposed in this paper, the attack is detected, allowing the vehicles in the platoon to gracefully degrade to a non-communicative control strategy that maintains safety across a variety of scenarios.
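A toy, single-subsystem sketch of the dynamic-watermarking idea (not the paper's networked, linear time-varying formulation): the agent injects a private excitation into its own input and checks that the measurements it receives over the network remain correlated with that excitation; an attacker who overwrites the measurements with data independent of the private excitation breaks this correlation. The scalar dynamics, noise levels, window length, and threshold below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 0.8, 1.0                   # assumed scalar (e.g. longitudinal) dynamics
sigma_e2, q, r = 0.3, 0.05, 0.05  # watermark / noise variances (assumed)
T, attack_start, window = 800, 400, 200

x, u_nom = 0.0, 0.0               # plant state and nominal control input
e_hist, y_hist = [], []
for k in range(T):
    e = rng.normal(0.0, np.sqrt(sigma_e2))            # private excitation (watermark)
    u = u_nom + e
    x = a * x + b * u + rng.normal(0.0, np.sqrt(q))
    y = x + rng.normal(0.0, np.sqrt(r))               # measurement sent over the network
    if k >= attack_start:                             # attacker overwrites the message
        y = rng.normal(0.0, 1.0)                      # with watermark-independent data
    e_hist.append(e)
    y_hist.append(y)
    if k >= window and (k + 1) % 200 == 0:
        e_lag = np.asarray(e_hist[k - window:k])           # e[k-window] .. e[k-1]
        y_win = np.asarray(y_hist[k - window + 1:k + 1])   # y[k-window+1] .. y[k]
        stat = np.mean(e_lag * y_win)                      # estimates Cov(e[k-1], y[k])
        ok = abs(stat - b * sigma_e2) < 0.5 * b * sigma_e2 # crude threshold (assumed)
        print(f"k={k+1:4d}  cross-covariance={stat:+.3f}  "
              f"expected={b*sigma_e2:.3f}  {'OK' if ok else 'ATTACK SUSPECTED'}")
```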
Dajun Du, Changda Zhang, Xue Li (2021)
Here, we investigate secure control of networked control systems by developing a new dynamic watermarking (DW) scheme. Firstly, the weaknesses of the conventional DW scheme are revealed, and the tradeoff between the effectiveness of false data injection attack (FDIA) detection and system performance loss is analysed. Secondly, we propose a new DW scheme, and its attack detection capability is examined using the additive distortion power of the closed-loop system. Furthermore, the FDIA detection effectiveness of the closed-loop system is analysed using the auto- and cross-covariance of the signals, where the positive correlation between the FDIA detection effectiveness and the watermarking intensity is measured. Thirdly, the closed-loop system's capacity to tolerate FDIA is investigated, and theoretical analysis shows that the system performance can be recovered from FDIA using our new DW scheme. Finally, experimental results from a networked inverted pendulum system demonstrate the validity of the proposed scheme.
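The positive relation between watermarking intensity and detection effectiveness, together with the accompanying performance loss, can be seen in a back-of-the-envelope calculation for the same scalar closed-loop abstraction used in the earlier snippets. The closed-loop pole, gain, and watermark variances below are assumptions, not values from the paper.

```python
# For x[k+1] = a_cl x[k] + b e[k] with a white watermark e of variance s_e, the
# watermark's contribution to the state variance (the "additive distortion power")
# is b^2 s_e / (1 - a_cl^2), while the gap in the cross-covariance detection statistic
# between normal operation (b s_e) and an FDIA independent of the watermark (0) grows
# linearly with s_e: detectability and performance loss both rise with intensity.
a_cl, b = 0.6, 1.0   # assumed stable closed-loop pole and input gain

print(" s_e   detection margin   distortion power")
for s_e in (0.05, 0.1, 0.2, 0.4, 0.8):
    margin = b * s_e                          # gap in E[e[k-1] * y[k]] with vs. without FDIA
    distortion = b**2 * s_e / (1 - a_cl**2)   # steady-state state-variance increase
    print(f"{s_e:4.2f}   {margin:15.3f}   {distortion:16.3f}")
```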
Multiagent systems consist of agents that locally exchange information through a physical network subject to a graph topology. Current control methods for networked multiagent systems assume knowledge of the graph topology in order to design distributed control laws that achieve desired global system behaviors. However, this assumption may not be valid in situations where the graph topology is subject to uncertainties, either due to changes in the physical network or due to modeling errors, especially for multiagent systems involving a large number of interacting agents. Motivated by this standpoint, this paper studies distributed control of networked multiagent systems with uncertain graph topologies. The proposed framework involves a controller architecture that can adapt its feedback gains in response to system variations. Specifically, we analytically show that the proposed controller drives the trajectories of a networked multiagent system subject to a graph topology with time-varying uncertainties to a close neighborhood of the trajectories of a given reference model having a desired graph topology. As a special case, we also show that a networked multiagent system subject to a graph topology with constant uncertainties asymptotically converges to the trajectories of a given reference model. Although the main result of this paper is presented in the context of the average consensus problem, the proposed framework can be used for many other problems related to networked multiagent systems with uncertain graph topologies.
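For context, the following minimal sketch simulates only the reference-model side of the story: standard average consensus x_dot = -L x over a known (desired) graph, discretized by Euler steps. The paper's actual contribution, adapting feedback gains when the true graph Laplacian is uncertain, is not reproduced here; the four-agent path graph and initial states are assumptions for illustration.

```python
import numpy as np

# Undirected path graph on 4 agents (assumed desired topology): 1-2-3-4
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian

x0 = np.array([4.0, -1.0, 0.5, 2.5])    # initial agent states (assumed)
x, dt = x0.copy(), 0.05
for _ in range(400):
    x = x + dt * (-L @ x)               # Euler step of x_dot = -L x

print("final states:   ", np.round(x, 3))
print("initial average:", np.mean(x0))  # average consensus preserves the mean
```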
We study the problem of learning-based attacks in linear systems, where the communication channel between the controller and the plant can be hijacked by a malicious attacker. We assume the attacker learns the dynamics of the system from observations, then overrides the controller's actuation signal while mimicking legitimate operation by providing fictitious sensor readings to the controller. On the other hand, the controller is on the lookout to detect the presence of the attacker and tries to enhance the detection performance by carefully crafting its control signals. We study the trade-offs between the information acquired by the attacker from observations, the detection capabilities of the controller, and the control cost. Specifically, we provide tight upper and lower bounds on the expected $\epsilon$-deception time, namely the time required by the controller to make a decision regarding the presence of an attacker with confidence at least $(1-\epsilon\log(1/\epsilon))$. We then show a probabilistic lower bound on the time that must be spent by the attacker learning the system in order for the controller to have a given expected $\epsilon$-deception time. We show that this bound is also order optimal, in the sense that if the attacker satisfies it, then there exists a learning algorithm achieving an expected deception time of the given order. Finally, we show a lower bound on the expected energy expenditure required to guarantee detection with confidence at least $1-\epsilon\log(1/\epsilon)$.
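The learning phase the attacker relies on can be sketched very simply for a scalar system: the attacker eavesdrops on state/input pairs and fits the dynamics by least squares, and the estimation error (which shrinks with the observation time T) is what ultimately limits how long fictitious sensor readings can remain statistically indistinguishable. The system parameters below are assumed for illustration, and the snippet does not reproduce the paper's deception-time bounds.

```python
import numpy as np

rng = np.random.default_rng(2)
a_true, b_true, q = 0.8, 0.5, 0.1   # assumed scalar plant x[k+1] = a x[k] + b u[k] + w[k]

for T in (20, 200, 2000):
    x = np.zeros(T + 1)
    u = rng.normal(size=T)                      # observed (legitimate) control inputs
    for k in range(T):
        x[k + 1] = a_true * x[k] + b_true * u[k] + rng.normal(0.0, np.sqrt(q))
    # Least-squares fit of [a, b] from the regression x[k+1] ~ a x[k] + b u[k]
    Phi = np.column_stack([x[:-1], u])
    theta, *_ = np.linalg.lstsq(Phi, x[1:], rcond=None)
    err = np.linalg.norm(theta - np.array([a_true, b_true]))
    print(f"T={T:5d}  estimated (a, b) = {np.round(theta, 3)}  error = {err:.3f}")
```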