
Adversarial Attacks to Scale-Free Networks: Testing the Robustness of Physical Criteria

Posted by: Jinhuan Wang
Publication date: 2020
Paper language: English





Adversarial attacks have recently alarmed the artificial intelligence community, since many machine learning algorithms have been found vulnerable to malicious manipulation. This paper studies adversarial attacks on scale-free networks to test their robustness in terms of statistical measures. In addition to the well-known random link rewiring (RLR) attack, two heuristic attacks are formulated and simulated: degree-addition-based link rewiring (DALR) and degree-interval-based link rewiring (DILR). These three strategies are applied to attack a number of strong scale-free networks of various sizes generated from the Barabási-Albert model. It is found that both DALR and DILR are more effective than RLR, in the sense that rewiring a smaller number of links suffices for the same attack. However, DILR is as concealed as RLR, in the sense that both introduce only relatively small changes to several typical structural properties, such as the average shortest path length, average clustering coefficient, and average diagonal distance. The results of this paper suggest that classifying a network as scale-free must be done with great care, in view of such adversarial attack effects.
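The abstract does not spell out the rewiring procedures, so the following is only a minimal Python sketch of one common reading of the RLR baseline (remove a random link, then add a link between two random non-adjacent nodes, so degrees are not preserved), applied to a Barabási-Albert network while tracking two of the structural measures named above. The use of networkx, the parameter values, and the function name `rlr_attack` are illustrative assumptions, not the paper's code.

```python
# Hedged sketch of a random link rewiring (RLR) attack -- an assumed
# procedure, not the paper's implementation. Each rewire removes a
# random edge and adds an edge between two random non-adjacent nodes,
# so the degree sequence is NOT preserved.
import random
import networkx as nx

def rlr_attack(G, n_rewires, seed=0):
    rng = random.Random(seed)
    H = G.copy()
    nodes = list(H.nodes())
    for _ in range(n_rewires):
        u, v = rng.choice(list(H.edges()))
        H.remove_edge(u, v)
        while True:
            a, b = rng.sample(nodes, 2)
            if not H.has_edge(a, b):
                H.add_edge(a, b)
                break
    return H

G = nx.barabasi_albert_graph(n=1000, m=3, seed=1)
H = rlr_attack(G, n_rewires=50)
for name, graph in (("original", G), ("attacked", H)):
    # measure on the largest component in case rewiring disconnects the graph
    comp = graph.subgraph(max(nx.connected_components(graph), key=len))
    print(name,
          "avg shortest path:", round(nx.average_shortest_path_length(comp), 3),
          "avg clustering:", round(nx.average_clustering(graph), 3))
```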


Read also

Recently, Broido & Clauset (2019) noted that (strict) scale-free networks are rare in real life. This might be related to the statement of Stumpf, Wiuf & May (2005) that sub-networks of scale-free networks are not scale-free. In the latter, those sub-networks are asymptotically scale-free, but one should not forget about second-order deviations (and possibly third-order ones as well). In this article, we introduce the concept of an extended scale-free network, inspired by the extended Pareto distribution, which may in fact be more realistic for describing real networks than the strict scale-free property. This property is consistent with Stumpf, Wiuf & May (2005): sub-networks of larger scale-free networks are not strictly scale-free, but extended scale-free.
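To make the idea concrete, one hedged way to write the contrast (the article's exact parametrization, drawn from the extended Pareto distribution, may differ) is a power-law tail with a second-order correction term:

```latex
% strict scale-free tail vs. an illustrative "extended" form
\[
\text{strict: } P(K > k) \;\sim\; c\,k^{-\alpha}
\qquad\text{vs.}\qquad
\text{extended: } P(K > k) \;\sim\; c\,k^{-\alpha}\bigl(1 + d\,k^{-\beta}\bigr),
\quad \beta > 0 .
\]
```

Under the extended form, a sub-network can keep the leading exponent alpha asymptotically while deviating measurably at finite degrees, consistent with the Stumpf, Wiuf & May (2005) observation.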
Disinformation continues to attract attention due to its increasing threat to society. Nevertheless, a disinformation-based attack on critical infrastructure has never been studied to date. Here, we consider traffic networks and focus on fake information that manipulates drivers' decisions to create congestion. We study the optimization problem faced by the adversary when choosing which streets to target to maximize disruption. We prove that finding an optimal solution is computationally intractable, implying that the adversary has no choice but to settle for suboptimal heuristics. We analyze one such heuristic and compare the cases when targets are spread across the city of Chicago vs. concentrated in its business district. Surprisingly, the latter results in more far-reaching disruption, with its impact felt as far as 2 kilometers from the closest target. Our findings demonstrate that vulnerabilities in critical infrastructure may arise not only from hardware and software, but also from behavioral manipulation.
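The abstract does not specify which heuristic is analyzed; as a generic stand-in for the kind of suboptimal heuristic an adversary might use, here is a greedy marginal-gain sketch, where `disruption` is a hypothetical black-box evaluator (e.g., a traffic simulation) and `streets` and `k` are illustrative parameters, none of them from the paper.

```python
# Generic greedy target selection -- a hedged illustration, not the
# authors' heuristic. `disruption` scores a set of targeted streets.
def greedy_targets(streets, k, disruption):
    """Pick k streets one at a time, each maximizing marginal disruption."""
    chosen = set()
    for _ in range(k):
        best = max((s for s in streets if s not in chosen),
                   key=lambda s: disruption(chosen | {s}))
        chosen.add(best)
    return chosen
```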
B. Zhou, Y. Q. Lv, Y. C. Mao (2021)
The k-shell decomposition plays an important role in unveiling the structural properties of a network; it is widely adopted to find the densest part of a network across a broad range of scientific fields, including the Internet, biological networks, social networks, etc. However, concern arises about the robustness of the k-shell structure when networks suffer adversarial attacks. Here, we introduce and formalize the problem of the k-shell attack and develop an efficient strategy to attack the k-shell structure by rewiring a small number of links. To the best of our knowledge, this is the first study of the robustness of the graph k-shell structure under adversarial attacks. In particular, we propose a Simulated Annealing (SA) based k-shell attack method and test it on four real-world social networks. Extensive experiments validate that the k-shell structure of a network is robust under random perturbation but quite vulnerable under adversarial attack; e.g., in the Dolphin and Throne networks, more than 40% of nodes change their k-shell values when only 10% of links are changed by our SA-based k-shell attack. These results suggest that a single structural feature can be significantly disturbed when only a small fraction of links is changed purposefully. Improving the robustness of various network properties against adversarial attacks is therefore an interesting topic for future work.
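Following only the abstract's description (the authors' exact move set and cooling schedule are not given), here is a hedged Python sketch of a simulated-annealing k-shell attack: it rewires links one at a time, keeping a rewire with the Metropolis rule on how much it changes nodes' core numbers. The names `T0`, `cooling`, `budget`, and the single-rewire move are illustrative assumptions.

```python
# Hedged SA-based k-shell attack sketch -- inspired by the abstract,
# not the authors' code. Objective: fraction of nodes whose k-shell
# (core number) differs from the original.
import math
import random
import networkx as nx

def kshell_damage(H, base_core):
    core = nx.core_number(H)
    return sum(core[v] != base_core[v] for v in H) / H.number_of_nodes()

def sa_kshell_attack(G, budget, T0=0.05, cooling=0.99, seed=0):
    rng = random.Random(seed)
    base_core = nx.core_number(G)
    H, T, used = G.copy(), T0, 0
    nodes = list(G.nodes())
    while used < budget:
        u, v = rng.choice(list(H.edges()))   # candidate link to remove
        a, b = rng.sample(nodes, 2)          # candidate link to add
        if H.has_edge(a, b):
            continue
        before = kshell_damage(H, base_core)
        H.remove_edge(u, v)
        H.add_edge(a, b)
        delta = kshell_damage(H, base_core) - before
        # Metropolis rule: always keep damaging rewires, sometimes keep others
        if delta >= 0 or rng.random() < math.exp(delta / T):
            used += 1
        else:
            H.remove_edge(a, b)
            H.add_edge(u, v)                 # revert the rewire
        T *= cooling
    return H
```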
Empirical estimation of the critical points at which complex systems abruptly flip from one state to another is among the remaining challenges in network science. Due to the stochastic nature of critical transitions, it is widely believed that critical points are difficult to estimate, and that it is even more difficult, if not impossible, to predict when such transitions occur [1-4]. We analyze a class of decaying dynamical networks experiencing persistent attacks, in which the magnitude of the attack is quantified by the probability of an internal failure and there is some chance that an internal failure is permanent. When the fraction of active neighbors declines to a critical threshold, cascading failures trigger a network breakdown. For this class of network we find, both numerically and analytically, that the time to network breakdown, equivalent to the network lifetime, is inversely dependent on the magnitude of the attack and logarithmically dependent on the threshold. We analyze how permanent attacks affect dynamical network robustness and use the network lifetime as a measure of dynamical robustness, offering new methodological insight into system dynamics.
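A rough simulation of the model as described in the abstract can be sketched as follows; the parameter names `p` (attack magnitude), `q` (chance a failure is permanent), and `f_c` (critical neighbor threshold), and the recovery rule for transient failures, are illustrative assumptions rather than the paper's notation.

```python
# Hedged simulation of a decaying network under persistent attack.
# Lifetime = first step at which no active nodes remain.
import random
import networkx as nx

def network_lifetime(G, p, q, f_c, max_t=100_000, seed=0):
    rng = random.Random(seed)
    dead = set()                                   # permanently failed nodes
    for t in range(1, max_t + 1):
        active = {v: v not in dead for v in G}     # transient failures recover
        # internal failures, each permanent with probability q
        for v in G:
            if active[v] and rng.random() < p:
                active[v] = False
                if rng.random() < q:
                    dead.add(v)
        # cascade: a node fails when too few of its neighbors are active
        changed = True
        while changed:
            changed = False
            for v in G:
                if active[v] and len(G[v]) > 0:
                    if sum(active[u] for u in G[v]) / len(G[v]) < f_c:
                        active[v] = False
                        changed = True
        if not any(active.values()):
            return t
    return max_t

G = nx.barabasi_albert_graph(200, 3, seed=1)
print("lifetime:", network_lifetime(G, p=0.05, q=0.02, f_c=0.5))
```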
State-of-the-art deep neural networks (DNNs) have been shown to perform excellently on unsupervised domain adaptation (UDA). However, recent work shows that DNNs perform poorly when attacked by adversarial samples, crafted by simply adding small perturbations to the original images. Although plenty of work has focused on adversarial attacks in general, as far as we know, there is no systematic research on the robustness of unsupervised domain adaptation models. Hence, we discuss the robustness of unsupervised domain adaptation against adversarial attacks for the first time. We benchmark various settings of adversarial attack and defense in domain adaptation, and propose a cross-domain attack method based on pseudo labels. Most importantly, we analyze the impact of different datasets, models, attack methods, and defense methods. Our work demonstrates the limited robustness of unsupervised domain adaptation models, and we hope it encourages the community to pay more attention to improving robustness against attacks.
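The cross-domain attack itself is not detailed in the abstract; the sketch below only shows the generic pseudo-label idea combined with a plain FGSM step on unlabeled target-domain inputs, where the model's own predictions stand in for ground truth. `model`, `eps`, and the [0, 1] clamp range are illustrative assumptions, not the authors' method.

```python
# Hedged sketch: FGSM attack driven by pseudo labels on unlabeled
# target-domain data (generic idea, not the paper's implementation).
import torch
import torch.nn.functional as F

def pseudo_label_fgsm(model, x, eps=4 / 255):
    model.eval()
    with torch.no_grad():
        pseudo = model(x).argmax(dim=1)        # pseudo labels from the model
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), pseudo)
    loss.backward()                            # gradient w.r.t. the input
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()
        x_adv = x_adv.clamp(0, 1)              # keep valid image range
    return x_adv.detach()
```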