
Secure Analysis of Dynamic Networks under Pinning Attacks against Synchronization

Added by Yuhze Li
Publication date: 2019
Language: English





In this paper, we first consider a pinning node selection and control gain co-design problem for complex networks. A necessary and sufficient condition for the synchronization of the pinning-controlled network at a homogeneous state is provided. A quantitative model is built to describe the pinning costs and to formulate the pinning node selection and control gain design problem, under different scenarios, as corresponding optimization problems. Algorithms to solve these problems efficiently are presented. Building on these results, we then take the existence of a malicious attacker into consideration and describe a resource allocation model for the defender and the attacker. We set up a leader-follower Stackelberg game framework to study the behaviour of both sides and investigate the equilibrium of this security game. Numerical examples and simulations are presented to demonstrate the main results.
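The paper's exact synchronization condition and selection algorithms are not reproduced here. As a rough illustration of how pinned-node selection under a budget can be automated, the sketch below assumes a commonly used spectral test in pinning control, namely that the smallest eigenvalue of L + D (graph Laplacian plus the diagonal matrix of pinning gains) must exceed a coupling-dependent threshold; the uniform gain, the threshold value, and the greedy heuristic are all placeholders, not the paper's method.

```python
import numpy as np

def laplacian(A):
    """Graph Laplacian L = diag(A 1) - A for an undirected adjacency matrix A."""
    return np.diag(A.sum(axis=1)) - A

def sync_margin(A, pinned, gain):
    """Smallest eigenvalue of L + D, where D places `gain` on the pinned nodes.
    In many pinning-control results this eigenvalue must exceed a
    coupling-dependent threshold for the network to synchronize."""
    n = A.shape[0]
    D = np.zeros((n, n))
    D[pinned, pinned] = gain
    return np.linalg.eigvalsh(laplacian(A) + D).min()

def greedy_pin_selection(A, budget, gain, threshold):
    """Greedily add pinned nodes until the spectral margin exceeds the
    threshold or the budget is exhausted (illustrative heuristic only)."""
    n = A.shape[0]
    pinned = []
    while len(pinned) < budget:
        best, best_margin = None, -np.inf
        for v in range(n):
            if v in pinned:
                continue
            m = sync_margin(A, pinned + [v], gain)
            if m > best_margin:
                best, best_margin = v, m
        pinned.append(best)
        if best_margin > threshold:
            break
    return pinned, sync_margin(A, pinned, gain)

# Toy usage: a 6-node ring network, pinning budget of 2 nodes.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
print(greedy_pin_selection(A, budget=2, gain=5.0, threshold=0.3))
```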



Related research

We study how to secure distributed filters for linear time-invariant systems with bounded noise under false-data injection attacks. A malicious attacker is able to arbitrarily manipulate the observations for a time-varying and unknown subset of the sensors. We first propose a recursive distributed filter consisting of two steps at each update. The first step employs a saturation-like scheme, which gives a small gain if the innovation is large corresponding to a potential attack. The second step is a consensus operation of state estimates among neighboring sensors. We prove the estimation error is upper bounded if the filter parameters satisfy a condition. We further analyze the feasibility of the condition and connect it to sparse observability in the centralized case. When the attacked sensor set is known to be time-invariant, the secured filter is modified by adding an online local attack detector. The detector is able to identify the attacked sensors whose observation innovations are larger than the detection thresholds. Also, with more attacked sensors being detected, the thresholds will adaptively adjust to reduce the space of the stealthy attack signals. The resilience of the secured filter with detection is verified by an explicit relationship between the upper bound of the estimation error and the number of detected attacked sensors. Moreover, for the noise-free case, we prove that the state estimate of each sensor asymptotically converges to the system state under certain conditions. Numerical simulations are provided to illustrate the developed results.
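The two-step update described above (a saturated-innovation correction followed by a consensus of estimates among neighbouring sensors) can be sketched as follows. The system matrices, filter gain K, saturation bound, and row-stochastic consensus weights W are placeholders; the condition on the filter parameters proved in the paper is not reproduced.

```python
import numpy as np

def saturate(innovation, bound):
    """Clip each innovation component; a large (possibly attacked)
    innovation therefore contributes only a bounded correction."""
    return np.clip(innovation, -bound, bound)

def secure_distributed_filter_step(x_hat, y, A_sys, C, K, W, bound):
    """One update of the two-step filter sketched in the abstract:
    (1) local prediction plus saturated-innovation correction per sensor,
    (2) consensus averaging of estimates among neighbours.
    x_hat: (N, n) estimates, y: (N, m) measurements,
    W: (N, N) row-stochastic consensus weights (all placeholders)."""
    N = x_hat.shape[0]
    corrected = np.zeros_like(x_hat)
    for i in range(N):
        pred = A_sys @ x_hat[i]
        innov = y[i] - C @ pred
        corrected[i] = pred + K @ saturate(innov, bound)
    # Consensus step: each sensor mixes its neighbours' corrected estimates.
    return W @ corrected

# Toy usage: scalar system, 3 sensors, sensor 0's measurement attacked.
A_sys = np.array([[1.0]]); C = np.array([[1.0]]); K = np.array([[0.5]])
W = np.full((3, 3), 1 / 3)
x_hat = np.zeros((3, 1))
y = np.array([[50.0], [1.0], [1.1]])   # first entry: injected false data
print(secure_distributed_filter_step(x_hat, y, A_sys, C, K, W, bound=2.0))
```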
We investigate the robustness of correlated networks against propagating attacks modeled by a susceptible-infected-removed (SIR) model. Using Monte-Carlo simulations, we numerically determine the first critical infection rate, above which a global outbreak of disease occurs, and the second critical infection rate, above which the disease disintegrates the network. Our results show that correlated networks are more robust than uncorrelated ones, regardless of whether they are assortative or disassortative, when the fraction of initially infected nodes is not too large. For a large initial fraction, the disassortative network becomes fragile while the assortative network remains robust. This behavior is related to the layered network structure inevitably generated by the rewiring procedure we adopt to realize correlated networks.
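A minimal Monte-Carlo sketch of the kind of SIR experiment described above is given below, sweeping the infection rate and recording the final removed fraction to locate the outbreak onset. The Erdos-Renyi test graph, the recovery probability, and the sweep values are assumptions for illustration; the correlated-network construction by rewiring used in the paper is not implemented here.

```python
import numpy as np

def sir_outbreak_fraction(A, beta, mu, init_frac, rng, steps=1000):
    """One discrete-time SIR Monte-Carlo run on adjacency matrix A:
    each infected node infects each susceptible neighbour with prob. beta
    per step and recovers (is removed) with prob. mu. Returns the final
    removed fraction, used to locate the outbreak threshold."""
    n = A.shape[0]
    state = np.zeros(n, dtype=int)            # 0 = S, 1 = I, 2 = R
    seeds = rng.choice(n, size=max(1, int(init_frac * n)), replace=False)
    state[seeds] = 1
    for _ in range(steps):
        infected = np.where(state == 1)[0]
        if infected.size == 0:
            break
        pressure = A[infected].sum(axis=0)    # infected neighbours per node
        susceptible = state == 0
        p_inf = 1.0 - (1.0 - beta) ** pressure
        new_inf = susceptible & (rng.random(n) < p_inf)
        recover = (state == 1) & (rng.random(n) < mu)
        state[new_inf] = 1
        state[recover] = 2
    return np.mean(state == 2)

# Sweep the infection rate on an Erdos-Renyi graph to see the outbreak onset.
rng = np.random.default_rng(0)
n, p = 500, 0.02
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1); A = A + A.T
for beta in (0.01, 0.05, 0.1, 0.2):
    frac = np.mean([sir_outbreak_fraction(A, beta, 0.2, 0.01, rng)
                    for _ in range(10)])
    print(f"beta={beta:.2f}  mean removed fraction={frac:.2f}")
```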
In this work, we use the spectral properties of graphons to study stability and sensitivity to noise of deterministic SIS epidemics over large networks. We consider the presence of additive noise in a linearized SIS model and we derive a noise index to quantify the deviation from the disease-free state due to noise. For finite networks, we show that the index depends on the adjacency eigenvalues of its graph. We then assume that the graph is a random sample from a piecewise Lipschitz graphon with finite rank and, using the eigenvalues of the associated graphon operator, we find an approximation of the index that is tight when the network size goes to infinity. A numerical example is included to illustrate the results.
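The approximation of the index rests on a standard property of dense graph sequences: for a graph sampled from a finite-rank step graphon, the leading adjacency eigenvalues divided by n approach the eigenvalues of the graphon operator. The snippet below checks that property numerically for an assumed two-block graphon (the block matrix B and block measures alpha are illustrative); the noise index itself is defined in the paper and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
B = np.array([[0.6, 0.1],
              [0.1, 0.4]])              # block connection probabilities
alpha = np.array([0.5, 0.5])            # block measures

# Eigenvalues of the graphon operator for a step graphon: eig(B @ diag(alpha)).
graphon_eigs = np.sort(np.linalg.eigvals(B @ np.diag(alpha)).real)[::-1]

# Sample a graph of size n from the step graphon.
n = 2000
blocks = rng.choice(2, size=n, p=alpha)          # latent block of each node
P = B[np.ix_(blocks, blocks)]                    # pairwise edge probabilities
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                   # simple undirected graph

# The two leading adjacency eigenvalues, scaled by 1/n, should be close to
# the graphon operator eigenvalues for large n.
adj_eigs = np.sort(np.linalg.eigvalsh(A))[::-1][:2] / n
print("graphon operator eigenvalues:", np.round(graphon_eigs, 3))
print("scaled adjacency eigenvalues:", np.round(adj_eigs, 3))
```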
Tianci Yang, Chen Lv (2021)
By using various sensors to measure the surroundings and sharing local sensor information with the surrounding vehicles through wireless networks, connected and automated vehicles (CAVs) are expected to increase safety, efficiency, and capacity of our transportation systems. However, the increasing usage of sensors has also increased the vulnerability of CAVs to sensor faults and adversarial attacks. Anomalous sensor values resulting from malicious cyberattacks or faulty sensors may cause severe consequences or even fatalities. In this paper, we increase the resilience of CAVs to faults and attacks by using multiple sensors for measuring the same physical variable to create redundancy. We exploit this redundancy and propose a sensor fusion algorithm for providing a robust estimate of the correct sensor information with bounded errors independent of the attack signals, and for attack detection and isolation. The proposed sensor fusion framework is applicable to a large class of security-critical Cyber-Physical Systems (CPSs). To minimize the performance degradation resulting from the usage of estimation for control, we provide an $H_{\infty}$ controller for cooperative adaptive cruise control (CACC)-equipped CAVs capable of stabilizing the closed-loop dynamics of each vehicle in the platoon while reducing the joint effect of estimation errors and communication channel noise on the tracking performance and string behavior of the vehicle platoon. Numerical examples are presented to illustrate the effectiveness of our methods.
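One common way to realize this kind of redundancy-based fusion with bounded error is an interval scheme: each healthy sensor's reading lies within a known noise bound of the true value, so any point covered by at least N - f reported intervals is a valid estimate, and sensors whose intervals exclude it can be flagged. The sketch below follows that idea; the noise bounds, the bound f on attacked sensors, and the flagging rule are assumptions and not necessarily the paper's algorithm.

```python
import numpy as np

def fuse_redundant_sensors(y, eps, f):
    """Interval-based fusion of N redundant measurements of one physical
    variable. Each healthy sensor i satisfies |y[i] - x| <= eps[i]; at most
    f sensors are attacked. The fused estimate is a point covered by at
    least N - f of the reported intervals; sensors whose interval excludes
    it are flagged as suspicious."""
    y, eps = np.asarray(y, float), np.asarray(eps, float)
    lo, hi = y - eps, y + eps
    # Candidate points: interval endpoints; one of them attains max coverage.
    candidates = np.concatenate([lo, hi])
    coverage = [(np.sum((lo <= c) & (c <= hi)), c) for c in candidates]
    best_cov, x_hat = max(coverage)
    if best_cov < len(y) - f:
        raise ValueError("no point is consistent with N - f sensors")
    suspicious = np.where((x_hat < lo) | (x_hat > hi))[0]
    return x_hat, suspicious

# Toy usage: 4 speed sensors, one of them attacked.
y = [20.1, 19.8, 35.0, 20.3]       # third reading is spoofed
eps = [0.5, 0.5, 0.5, 0.5]
print(fuse_redundant_sensors(y, eps, f=1))
```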
The security of mobile robotic networks (MRNs) has been an active research topic in recent years. This paper demonstrates that the observable interaction process of MRNs under formation control presents increasingly severe threats. Specifically, we find that an external attack robot, which has only partial observation of the MRN and knows neither the system dynamics nor the access structure, can learn the interaction rules from observations and utilize them to replace a target robot, destroying the cooperation performance of the MRN. We refer to this novel attack as sneak; it endows the attacker with the ability to learn knowledge from observations and is hard to counter with traditional defense techniques. The key insight is to separately reveal the internal interaction structure among the robots and the external interaction mechanism with the environment from the coupled state evolution, which is influenced by the unknown rules and the unobservable part of the MRN. To this end, we first provide a general model of the interaction process and prove the learnability of the interaction rules. Then, with the learned rules, we design an Evaluate-Cut-Restore (ECR) attack strategy that accounts for the partial interaction structure and the geometric pattern. We also establish sufficient conditions for a successful sneak attack with maximum control impact over the MRN. Extensive simulations illustrate the feasibility and effectiveness of the proposed attack.
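The learnability claim can be illustrated with the generic identification step behind it: recovering a linear interaction matrix W from observed state trajectories by least squares, assuming a rule of the form x(t+1) = W x(t). The consensus-type weight matrix W_true, the number of observed runs, and full-state observability are assumptions for illustration; the paper's ECR strategy and its handling of the unobservable part of the MRN are not reproduced.

```python
import numpy as np

def learn_interaction_weights(trajectories):
    """Least-squares identification of a linear interaction rule
    x(t+1) = W x(t) from observed state trajectories (each of shape (T, n)).
    This is only the generic identification step behind 'learning the
    interaction rules'."""
    X_past = np.vstack([X[:-1] for X in trajectories])
    X_next = np.vstack([X[1:] for X in trajectories])
    # Solve X_next ≈ X_past @ W.T in the least-squares sense.
    W_T, *_ = np.linalg.lstsq(X_past, X_next, rcond=None)
    return W_T.T

# Toy usage: a 4-robot consensus-type rule observed over a few short runs.
rng = np.random.default_rng(2)
n = 4
W_true = np.array([[0.7, 0.3, 0.0, 0.0],
                   [0.1, 0.6, 0.3, 0.0],
                   [0.0, 0.2, 0.5, 0.3],
                   [0.2, 0.0, 0.2, 0.6]])       # row-stochastic weights
runs = []
for _ in range(5):
    X = np.empty((10, n))
    X[0] = rng.normal(size=n)
    for t in range(9):
        X[t + 1] = W_true @ X[t]
    runs.append(X)
print(np.round(learn_interaction_weights(runs), 3))   # ≈ W_true
```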