
Data Attacks on Power System State Estimation: Limited Adversarial Knowledge vs. Limited Attack Resources

Posted by Kaikai Pan
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





A class of data integrity attacks, known as false data injection (FDI) attacks, has been studied in a considerable body of work. It has been shown that, with perfect knowledge of the system model and the capability to manipulate a certain number of measurements, FDI attacks can coordinate the corruption of measurements so as to remain stealthy against bad data detection. A more realistic attack, however, is one with limited adversarial knowledge of the system model and limited attack resources. In this paper, we generalize data attacks so that they can be either pure FDI attacks or FDI attacks combined with availability attacks (e.g., DoS attacks), and we analyze attacks with limited adversarial knowledge or limited attack resources. The attack impact is evaluated by the proposed metrics, and the detection probability of attacks is calculated using the statistical distribution of the measurement data with and without attacks. The analysis is supported by results from a power system use case. The results show how important the system knowledge is to the attacker and which measurements are more vulnerable to attacks with limited resources.
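To make the stealth condition concrete, the sketch below (a minimal numpy example, not code from the paper; the toy Jacobian H, the noise covariance R, and the system size are assumptions) builds a DC measurement model, estimates the state by weighted least squares, and compares the bad-data-detection residual with no attack, with a perfect-knowledge FDI attack a = Hc, and with an attack built from a perturbed model.

# Minimal sketch (not the paper's code): stealthy false data injection against
# DC state estimation with a residual-based bad data detector. H, R, and the
# sizes below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 3                       # measurements, state variables (toy sizes)
H = rng.normal(size=(m, n))       # assumed measurement Jacobian (DC model)
R = np.eye(m) * 0.01              # measurement noise covariance (assumption)

def wls_estimate(z):
    """Weighted least-squares estimate x_hat = (H' R^-1 H)^-1 H' R^-1 z."""
    W = np.linalg.inv(R)
    G = H.T @ W @ H
    return np.linalg.solve(G, H.T @ W @ z)

def residual_norm(z):
    """Norm of the measurement residual used by the bad data detector."""
    x_hat = wls_estimate(z)
    return np.linalg.norm(z - H @ x_hat)

x_true = rng.normal(size=n)
z = H @ x_true + rng.normal(scale=0.1, size=m)

# Perfect-knowledge FDI attack: a = H c leaves the residual unchanged (stealthy).
c = np.array([0.5, -0.2, 0.1])            # state bias chosen by the attacker
a_stealthy = H @ c

# Limited-knowledge attack: the attacker only has a perturbed model H_tilde,
# so the injected vector no longer lies in the column space of H.
H_tilde = H + rng.normal(scale=0.3, size=H.shape)
a_imperfect = H_tilde @ c

print("residual, no attack:        ", residual_norm(z))
print("residual, perfect knowledge:", residual_norm(z + a_stealthy))
print("residual, limited knowledge:", residual_norm(z + a_imperfect))

With perfect knowledge the injected vector lies in the column space of H, so the residual is unchanged; with a perturbed model the residual grows and the attack becomes easier to detect.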




Read also

Heng Chang, Yu Rong, Tingyang Xu (2021)
With the success of graph embedding models in both academia and industry, the robustness of graph embedding against adversarial attacks inevitably becomes a crucial problem in graph learning. Existing works usually perform the attack in a white-box fashion: they need access to the predictions/labels to construct their adversarial loss. However, the inaccessibility of predictions/labels makes the white-box attack impractical for a real graph learning system. This paper promotes current frameworks in a more general and flexible sense -- we aim to attack various kinds of graph embedding models in a black-box setting. We investigate the theoretical connections between graph signal processing and graph embedding models and formulate the graph embedding model as a general graph signal process with a corresponding graph filter. Therefore, we design a generalized adversarial attacker: GF-Attack. Without accessing any labels or model predictions, GF-Attack can perform the attack directly on the graph filter in a black-box fashion. We further prove that GF-Attack can perform an effective attack without knowing the number of layers of graph embedding models. To validate the generalization of GF-Attack, we construct the attacker on four popular graph embedding models. Extensive experiments validate the effectiveness of GF-Attack on several benchmark datasets.
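As a rough illustration of attacking a graph filter without labels (a sketch under assumed sizes, filter order, and budget, not the GF-Attack implementation), the snippet below scores candidate edge flips by how much they perturb a K-th order normalized-adjacency filter applied to the node features and keeps the largest ones within a flip budget.

# Illustrative sketch only: label-free edge-flip scoring against a polynomial
# graph filter. The graph, features, K, and budget are assumptions.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_feats, K, budget = 12, 4, 2, 3
A = (rng.random((n_nodes, n_nodes)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T                                      # symmetric adjacency, no self-loops
X = rng.normal(size=(n_nodes, n_feats))          # node features

def filter_output(adj):
    """K-th power of the symmetrically normalized adjacency, acting on X."""
    deg = adj.sum(axis=1) + 1e-9
    S = adj / np.sqrt(np.outer(deg, deg))
    return np.linalg.matrix_power(S, K) @ X

base = filter_output(A)
scores = []
for i, j in itertools.combinations(range(n_nodes), 2):
    A_mod = A.copy()
    A_mod[i, j] = A_mod[j, i] = 1.0 - A_mod[i, j]    # flip edge (i, j)
    scores.append((np.linalg.norm(filter_output(A_mod) - base), (i, j)))

scores.sort(reverse=True)
print("edge flips with largest filter perturbation:", [e for _, e in scores[:budget]])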
Understanding smart grid cyber attacks is key for developing appropriate protection and recovery measures. Advanced attacks pursue maximized impact at minimized costs and detectability. This paper conducts risk analysis of combined data integrity and availability attacks against the power system state estimation. We compare the combined attacks with pure integrity attacks - false data injection (FDI) attacks. A security index for vulnerability assessment to these two kinds of attacks is proposed and formulated as a mixed integer linear programming problem. We show that such combined attacks can succeed with fewer resources than FDI attacks. Combined attacks with limited knowledge of the system model also have advantages in remaining stealthy against bad data detection. Finally, the risk of combined attacks to reliable system operation is evaluated using the results from vulnerability assessment and attack impact analysis. The findings in this paper are validated and supported by a detailed case study.
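A toy comparison of the two attack types (a sketch with an assumed random Jacobian, not the cited paper's security-index MILP) is given below: a pure FDI attack must inject on every measurement touched by H c, whereas a combined attack injects on a subset and makes the rest unavailable, and the estimator run on the remaining measurements sees no increase in the residual.

# Rough sketch: combined integrity/availability attack vs. pure FDI.
# H and all sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
m, n = 10, 4
H = rng.normal(size=(m, n))                    # assumed measurement Jacobian
x_true = rng.normal(size=n)
z = H @ x_true + rng.normal(scale=0.05, size=m)

def residual_norm(zz, HH):
    """Residual of the least-squares estimate on the available measurements."""
    x_hat = np.linalg.lstsq(HH, zz, rcond=None)[0]
    return np.linalg.norm(zz - HH @ x_hat)

c = np.zeros(n)
c[0] = 1.0                                     # bias the attacker wants on one state
a = H @ c
affected = np.flatnonzero(np.abs(a) > 1e-12)   # rows a pure FDI attack must corrupt

# Combined attack: inject on half of the affected rows, jam (DoS) the other half.
inject = affected[: len(affected) // 2]
jam = affected[len(affected) // 2:]
z_att = z.copy()
z_att[inject] += a[inject]
keep = np.setdiff1d(np.arange(m), jam)         # estimator drops unavailable rows

print("pure FDI injections needed:          ", len(affected))
print("combined attack (inject + jam):      ", len(inject), "+", len(jam))
print("residual, no attack (kept rows):     ", residual_norm(z[keep], H[keep]))
print("residual, combined attack:           ", residual_norm(z_att[keep], H[keep]))
print("residual, partial injection, no DoS: ", residual_norm(z_att, H))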
Although deep neural networks (DNNs) have made rapid progress in recent years, they are vulnerable in adversarial environments. A malicious backdoor could be embedded in a model by poisoning the training dataset, whose intention is to make the infected model give wrong predictions during inference when the specific trigger appears. To mitigate the potential threats of backdoor attacks, various backdoor detection and defense methods have been proposed. However, the existing techniques usually require the poisoned training data or access to the white-box model, which is commonly unavailable in practice. In this paper, we propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model. We introduce a gradient-free optimization algorithm to reverse-engineer the potential trigger for each class, which helps to reveal the existence of backdoor attacks. In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models. Extensive experiments on hundreds of DNN models trained on several datasets corroborate the effectiveness of our method under the black-box setting against various backdoor attacks.
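The core loop of such a query-only trigger search can be sketched as follows (a toy stand-in, not the B3D algorithm or code: the backdoored predict function, the additive trigger, and the random-search settings are all assumptions). A class for which an unusually small trigger reaches a high success rate would be flagged as a backdoor target.

# Toy sketch of gradient-free trigger reverse-engineering with query access only.
import numpy as np

rng = np.random.default_rng(3)
d, target = 16, 2

def predict(batch):
    # Hypothetical backdoored black-box classifier: it returns the target class
    # whenever the last three features are all above 0.8, otherwise class 0.
    return np.where((batch[:, -3:] > 0.8).all(axis=1), target, 0)

def success_rate(trigger, inputs):
    patched = np.clip(inputs + trigger, 0.0, 1.0)   # additive trigger (assumption)
    return float(np.mean(predict(patched) == target))

inputs = rng.random((64, d))
trigger = np.zeros(d)
best = success_rate(trigger, inputs)
for _ in range(1000):                # simple random-search refinement, no gradients
    cand = np.clip(trigger + rng.normal(scale=0.1, size=d), 0.0, 1.0)
    rate = success_rate(cand, inputs)
    if rate >= best:
        trigger, best = cand, rate
print("success rate of reverse-engineered trigger:", best,
      "| trigger L1 norm:", round(float(np.abs(trigger).sum()), 2))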
An unobservable false data injection (FDI) attack on AC state estimation (SE) is introduced and its consequences on the physical system are studied. With a focus on understanding the physical consequences of FDI attacks, a bi-level optimization problem is introduced whose objective is to maximize the physical line flows subsequent to an FDI attack on DC SE. The maximization is subject to constraints on both attacker resources (size of attack) and attack detection (limiting load shifts) as well as those required by DC optimal power flow (OPF) following SE. The resulting attacks are tested on a more realistic non-linear system model using AC state estimation and ACOPF, and it is shown that, with an appropriately chosen sub-network, the attacker can overload transmission lines with moderate shifts of load.
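The mechanism can be illustrated on a 3-bus toy case (a sketch of the idea only, not the paper's bi-level formulation; the network data, costs, and the 30 MW load shift are assumptions): a bounded shift in the estimated loads makes the operator's DC OPF under-dispatch the remote generator, and the resulting physical flow on line 1-3 exceeds its limit.

# Toy 3-bus illustration of an FDI-induced load shift followed by DC OPF.
import numpy as np
from scipy.optimize import linprog

# PTDFs of lines (1-2, 1-3, 2-3) w.r.t. injections at buses 2 and 3
# (bus 1 is the slack), for equal line reactances.
PTDF = np.array([[-2/3, -1/3],
                 [-1/3, -2/3],
                 [ 1/3, -1/3]])
f_max = np.array([100.0, 50.0, 100.0])       # line limits (MW)
cost = np.array([10.0, 50.0])                # generator costs at bus 1 and bus 3

def dcopf(d2, d3):
    """Least-cost dispatch g = (g1, g3) meeting loads d2, d3 within line limits."""
    # Flows depend only on g3 and the loads, since g1 sits at the slack bus.
    A_flow = PTDF @ np.array([[0.0, 0.0], [0.0, 1.0]])   # flow sensitivity to (g1, g3)
    base = PTDF @ np.array([-d2, -d3])                   # flows from the loads alone
    A_ub = np.vstack([A_flow, -A_flow])
    b_ub = np.concatenate([f_max - base, f_max + base])
    A_eq = np.array([[1.0, 1.0]])                        # power balance
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[d2 + d3],
                  bounds=[(0, 200), (0, 200)])
    return res.x

def line_flows(g, d2, d3):
    """Physical flows for dispatch g under the true loads."""
    return PTDF @ np.array([-d2, g[1] - d3])

d2_true, d3_true = 0.0, 90.0
g_honest = dcopf(d2_true, d3_true)                 # dispatch with correct loads
g_misled = dcopf(d2_true + 30.0, d3_true - 30.0)   # dispatch with falsified loads

print("dispatch on true loads:     ", np.round(g_honest, 1),
      "| true flows:", np.round(line_flows(g_honest, d2_true, d3_true), 1))
print("dispatch on falsified loads:", np.round(g_misled, 1),
      "| true flows:", np.round(line_flows(g_misled, d2_true, d3_true), 1),
      "| limit on line 1-3:", f_max[1])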
Training generative adversarial networks (GAN) using too little data typically leads to discriminator overfitting, causing training to diverge. We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited data regimes. The approach does not require changes to loss functions or network architectures, and is applicable both when training from scratch and when fine-tuning an existing GAN on another dataset. We demonstrate, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images. We expect this to open up new application domains for GANs. We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42.
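The adaptive part can be sketched as a small feedback controller on the augmentation probability p (a simplified sketch, not the StyleGAN2-ADA code; the adjustment step size is an assumption, while the sign-of-logits heuristic r_t and the target of about 0.6 follow the paper's description).

# Minimal sketch of the adaptive augmentation controller: p is nudged up or
# down so that the overfitting statistic r_t stays near a target value.
import numpy as np

p = 0.0                 # current augmentation probability
target_rt = 0.6         # desired value of the overfitting heuristic
adjust = 0.01           # step size per update (assumed)

def update_p(d_logits_real, p):
    """r_t = E[sign(D(real))]; raise p when D is too confident on real images."""
    r_t = np.mean(np.sign(d_logits_real))
    p += adjust if r_t > target_rt else -adjust
    return float(np.clip(p, 0.0, 1.0)), r_t

# Example: discriminator logits on a batch of real images (stand-in values).
logits = np.array([2.1, 0.7, -0.3, 1.5, 0.9, 1.8, -0.1, 2.4])
p, r_t = update_p(logits, p)
print("r_t:", r_t, "| new p:", p)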