Adversarial attacks have recently drawn growing attention in the artificial intelligence community, since many machine learning algorithms have been found vulnerable to maliciously crafted inputs. This paper studies adversarial attacks on scale-free networks to test their robustness in terms of statistical measures. In addition to the well-known random link rewiring (RLR) attack, two heuristic attacks are formulated and simulated: degree-addition-based link rewiring (DALR) and degree-interval-based link rewiring (DILR). These three strategies are applied to a number of strong scale-free networks of various sizes generated by the Barabasi-Albert model. It is found that both DALR and DILR are more effective than RLR, in the sense that they achieve the same attack by rewiring fewer links. However, DILR is as concealed as RLR, in the sense that both introduce only small changes to several typical structural properties, such as the average shortest path length, the average clustering coefficient, and the average diagonal distance. The results suggest that, from the viewpoint of adversarial attack effects, great care must be taken when classifying a network as scale-free.
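To illustrate the kind of procedure the abstract describes, the following is a minimal sketch, assuming the networkx library, of generating a Barabasi-Albert network and applying a random link rewiring (RLR) style attack, then comparing two of the structural measures mentioned above. The function name rlr_attack, the network sizes, and the exact rewiring rule are illustrative assumptions rather than the paper's own code; the DALR and DILR heuristics would follow the same pattern but select links according to node degrees.

```python
# Sketch of an RLR-style attack on a Barabasi-Albert network (illustrative only).
import random
import networkx as nx

def rlr_attack(G, n_rewire, seed=None):
    """Remove n_rewire randomly chosen links and add the same number
    of new links between previously unconnected node pairs."""
    rng = random.Random(seed)
    H = G.copy()
    for _ in range(n_rewire):
        # remove a uniformly random existing edge
        u, v = rng.choice(list(H.edges()))
        H.remove_edge(u, v)
        # add a new edge between two currently unconnected nodes
        while True:
            a, b = rng.sample(list(H.nodes()), 2)
            if not H.has_edge(a, b):
                H.add_edge(a, b)
                break
    return H

def avg_path_len(G):
    """Average shortest path length on the largest connected component
    (rewiring can, in rare cases, disconnect the graph)."""
    comp = max(nx.connected_components(G), key=len)
    return nx.average_shortest_path_length(G.subgraph(comp))

if __name__ == "__main__":
    # a strong scale-free network; n and m are illustrative choices
    G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)
    H = rlr_attack(G, n_rewire=50, seed=42)
    print("avg shortest path:", avg_path_len(G), "->", avg_path_len(H))
    print("avg clustering:   ", nx.average_clustering(G), "->", nx.average_clustering(H))
```

A degree-based variant would replace the uniform edge and node choices with rules conditioned on node degree (e.g., preferring links attached to nodes within a chosen degree interval), which is the general idea behind the DALR and DILR heuristics named in the abstract.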