In social networks, privacy protection can be pursued by removing sensitive target links. However, hidden links may still be re-identified by applying link prediction methods to the observable network. In this paper, the classical link prediction method known as the Resource Allocation Index (RA) is adopted as a privacy attack. Several defense methods, including heuristic and evolutionary approaches, are proposed to protect targeted links from RA attacks by perturbing the network structure. This is the first study of privacy protection for targeted links against link-prediction-based attacks. In the experiments, links are randomly selected from the network as targets. Simulation results on six real-world networks demonstrate the superiority of the evolutionary perturbation approach for defending targeted links against RA attacks. Moreover, transfer experiments show that, although the evolutionary perturbation approach is designed against RA attacks, it is also effective against other link-prediction-based attacks.
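Since the attack above hinges on the RA index, a minimal sketch may help fix ideas: RA scores a candidate pair (x, y) as the sum of 1/k_z over the common neighbors z of x and y, where k_z is the degree of z. The helper names and the toy edge list below are illustrative assumptions, not code from the paper.

```python
# Minimal sketch of the Resource Allocation (RA) index on a simple
# undirected graph given as an edge list; not the authors' implementation.
from collections import defaultdict

def build_adj(edges):
    """Build an undirected adjacency map: node -> set of neighbors."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def ra_score(adj, x, y):
    """RA index of pair (x, y): sum of 1/degree over common neighbors."""
    return sum(1.0 / len(adj[z]) for z in adj[x] & adj[y])

# Toy observable network with the sensitive link (1, 4) already removed.
edges = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
adj = build_adj(edges)
# Common neighbors of 1 and 4 are {2, 3}, each of degree 3, so the
# score is 1/3 + 1/3 ≈ 0.667 -- high enough to "re-observe" the link.
print(ra_score(adj, 1, 4))
```

In this setting, a perturbation-based defense of the kind the abstract describes would rewire the observable network so that targeted hidden links no longer rank highly under such scores.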
Networks can represent a wide range of complex systems, such as social, biological, and technological systems. Link prediction is one of the most important problems in network analysis and has attracted much research interest recently. Many link prediction …
Information entropy has proved to be an effective tool for quantifying the structural importance of complex networks. In previous work (Xu et al., 2016), we measured the contribution of a path in link prediction with information entropy …
With the boom of edge intelligence, its vulnerability to adversarial attacks has become an urgent problem. A so-called adversarial example can fool a deep learning model on an edge node into misclassifying. Due to the property of transferability, the adversarial …
State-of-the-art link prediction utilizes combinations of complex features derived from network panel data. Here we show that computationally less expensive features can achieve the same performance in the common scenario in which the data is available …
Humans rely heavily on shape information to recognize objects. Conversely, convolutional neural networks (CNNs) are biased more towards texture. This is perhaps the main reason why CNNs are vulnerable to adversarial examples. Here, we explore how shape …