A class of data integrity attacks, known as false data injection (FDI) attacks, has received considerable attention. It has been shown that, with perfect knowledge of the system model and the capability to manipulate a certain number of measurements, FDI attacks can coordinate measurement corruption to remain stealthy against bad data detection. A more realistic attack, however, operates with limited adversarial knowledge of the system model and limited attack resources. In this paper, we generalize data attacks so that they can be pure FDI attacks or FDI attacks combined with availability attacks (e.g., DoS attacks), and we analyze attacks with limited adversarial knowledge or limited attack resources. The attack impact is evaluated with the proposed metrics, and the detection probability of attacks is calculated using the distributional properties of the data with and without attacks. The analysis is supported with results from a power system use case. The results show how important system knowledge is to the attacker and which measurements are more vulnerable to attacks with limited resources.
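The classical stealth condition behind such FDI attacks on linear (DC) state estimation can be sketched as follows: if the attack vector lies in the column space of the measurement Jacobian, the measurement residual used by bad data detection is unchanged. The system size, Jacobian values, and perturbation below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative DC state estimation setup (hypothetical values):
# m = 8 measurements, n = 3 state variables.
rng = np.random.default_rng(0)
H = rng.normal(size=(8, 3))                   # measurement Jacobian
x_true = rng.normal(size=3)                   # true system state
z = H @ x_true + 0.01 * rng.normal(size=8)    # noisy measurements

def residual_norm(z, H):
    """Least-squares state estimate and the norm of the measurement residual."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

# Attacker with perfect model knowledge picks any state perturbation c
# and injects a = H @ c, which lies in the column space of H.
c = np.array([0.5, -0.2, 0.1])
a = H @ c

r_clean = residual_norm(z, H)
r_attacked = residual_norm(z + a, H)

# The residual is identical with and without the attack, so a
# chi-square bad data detector based on it cannot flag the injection.
assert abs(r_clean - r_attacked) < 1e-9
```

The attacker's limited-knowledge case studied in the paper corresponds to constructing `a` from an imperfect estimate of `H`, in which case `a` generally falls outside the column space and a nonzero residual shift appears.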
With the success of graph embedding models in both academia and industry, the robustness of graph embedding against adversarial attacks has inevitably become a crucial problem in graph learning. Existing works usually perform the attack in a whi…
Understanding smart grid cyber attacks is key to developing appropriate protection and recovery measures. Advanced attacks pursue maximized impact at minimized cost and detectability. This paper conducts a risk analysis of combined data integrity and…
Although deep neural networks (DNNs) have made rapid progress in recent years, they are vulnerable in adversarial environments. A malicious backdoor can be embedded in a model by poisoning the training dataset, whose intention is to make the infect…
An unobservable false data injection (FDI) attack on AC state estimation (SE) is introduced and its consequences for the physical system are studied. With a focus on understanding the physical consequences of FDI attacks, a bi-level optimization probl…
Training generative adversarial networks (GANs) with too little data typically leads to discriminator overfitting, causing training to diverge. We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limi…