Considering the wide application of network embedding methods in graph data mining, and inspired by adversarial attacks in deep learning, this paper proposes a Genetic Algorithm (GA) based Euclidean Distance Attack strategy (EDA) against network embedding, so as to prevent certain structural information from being discovered. EDA focuses on disturbing the Euclidean distance between pairs of nodes in the embedding space as much as possible through minimal modifications of the network structure. Since a large number of downstream network algorithms, such as community detection and node classification, rely on the Euclidean distance between nodes to evaluate their similarity in the embedding space, EDA can be considered a universal attack on a variety of network algorithms. Unlike traditional supervised attack strategies, EDA does not require labeling information; in other words, it is an unsupervised network embedding attack method.
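To make the attack idea concrete, the following is a minimal sketch of a GA-based Euclidean distance attack. It is an illustration only: the spectral embedding used as the target model, the flip budget, and all GA hyper-parameters (population size, number of generations, mutation rate) are assumptions, not the paper's actual configuration.

```python
# Hedged sketch of a GA-based Euclidean Distance Attack (EDA).
# A spectral embedding stands in for whatever embedding model the attack targets.
import itertools
import random

import networkx as nx
import numpy as np


def embed(G, dim=8):
    """Toy stand-in embedding: leading eigenvectors of the adjacency matrix."""
    A = nx.to_numpy_array(G, nodelist=sorted(G.nodes()))
    vals, vecs = np.linalg.eigh(A)
    return vecs[:, -dim:]  # one row per node


def fitness(G, flips, base_emb):
    """Total Euclidean shift of node embeddings after flipping the given edges.

    Note: a crude proxy, since eigenvector sign/order can also shift."""
    H = G.copy()
    for u, v in flips:
        if H.has_edge(u, v):
            H.remove_edge(u, v)
        else:
            H.add_edge(u, v)
    return float(np.linalg.norm(embed(H) - base_emb, axis=1).sum())


def eda_attack(G, budget=3, pop_size=20, generations=30, mutation_rate=0.2):
    candidates = list(itertools.combinations(sorted(G.nodes()), 2))
    base_emb = embed(G)
    # Each individual is a set of `budget` edge flips (add or remove).
    population = [random.sample(candidates, budget) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=lambda ind: fitness(G, ind, base_emb),
                        reverse=True)
        parents = scored[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = random.sample(list(set(a) | set(b)), budget)   # crossover
            if random.random() < mutation_rate:                    # mutation
                child[random.randrange(budget)] = random.choice(candidates)
            children.append(child)
        population = parents + children
    return max(population, key=lambda ind: fitness(G, ind, base_emb))


if __name__ == "__main__":
    G = nx.karate_club_graph()
    print("edges to flip:", eda_attack(G, budget=2, generations=10))
```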
Neural node embeddings have recently emerged as a powerful representation for supervised learning tasks involving graph-structured data. We leverage this recent advance to develop a novel algorithm for unsupervised community discovery in graphs. Through extensive experimental studies on simulated and real-world data, we demonstrate that the proposed approach consistently improves over the current state of the art. Specifically, our approach empirically attains the information-theoretic limits for community recovery under the benchmark Stochastic Block Models for graph generation, and exhibits better stability and accuracy than both Spectral Clustering and Acyclic Belief Propagation near the community recovery limits.
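As an illustration of the overall pipeline, the sketch below embeds the nodes of a Stochastic Block Model graph and clusters the embeddings to recover the planted communities. The spectral embedding and the k-means step are stand-ins for the neural node embeddings and the clustering procedure of the original approach.

```python
# Minimal sketch: community recovery from node embeddings on an SBM graph.
import networkx as nx
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

# Planted two-block SBM: dense within blocks, sparse across blocks.
sizes = [100, 100]
probs = [[0.10, 0.02],
         [0.02, 0.10]]
G = nx.stochastic_block_model(sizes, probs, seed=42)
truth = [0] * sizes[0] + [1] * sizes[1]  # nodes are labeled in block order

# Stand-in embedding: top eigenvectors of the adjacency matrix.
A = nx.to_numpy_array(G)
vals, vecs = np.linalg.eigh(A)
emb = vecs[:, -2:]

# Cluster the embedded nodes and compare against the planted partition.
pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)
print("NMI vs. planted communities:", normalized_mutual_info_score(truth, pred))
```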
We study the problem of embedding edgeless nodes, such as users who newly enter the underlying network, using graph neural networks (GNNs), which have been widely studied for effective representation learning of graphs thanks to their highly expressive capability via message passing. Our study is motivated by the fact that existing GNNs cannot be adopted for our problem, since message passing to such edgeless nodes, which have no connections, is impossible. To tackle this challenge, we propose Edgeless-GNN, a new framework that enables GNNs to generate node embeddings even for edgeless nodes through unsupervised inductive learning. Specifically, we start by constructing a $k$-nearest neighbor graph ($k$NNG) based on the similarity of node attributes to replace the GNN computation graph defined by the neighborhood-based aggregation of each node. As our main contributions, we use the known network structure to train the model parameters and design a new loss function using energy-based learning so that our model learns the network structure. For edgeless nodes, we inductively infer their embeddings by using the edges obtained via $k$NNG construction as a computation graph. By evaluating the performance of various downstream machine learning (ML) tasks, we empirically demonstrate that Edgeless-GNN consistently outperforms state-of-the-art methods of inductive network embedding. Moreover, our findings corroborate the effectiveness of Edgeless-GNN in judiciously combining the replaced computation graph with our newly designed loss. Our framework is GNN-model-agnostic; thus, GNN models can be appropriately chosen according to one's needs and ML tasks.
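The core idea, replacing the missing neighborhood of an edgeless node with an attribute-space kNN graph, can be sketched as follows. The random attribute and embedding matrices, the value of k, and the mean-pooling aggregation are illustrative assumptions rather than the paper's actual GNN architecture or loss.

```python
# Hedged sketch of the kNN-graph idea behind Edgeless-GNN: an edgeless node has
# no links to aggregate over, so neighbors are taken from attribute similarity.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n_existing, n_feat, emb_dim, k = 200, 16, 32, 5

X_existing = rng.normal(size=(n_existing, n_feat))   # attributes of known nodes
Z_existing = rng.normal(size=(n_existing, emb_dim))  # their trained embeddings
x_new = rng.normal(size=(1, n_feat))                 # attributes of an edgeless node

# Build the attribute-space kNN "computation graph" for the new node.
knn = NearestNeighbors(n_neighbors=k).fit(X_existing)
_, neighbor_idx = knn.kneighbors(x_new)

# Inductive inference: aggregate neighbor embeddings (mean pooling here).
z_new = Z_existing[neighbor_idx[0]].mean(axis=0)
print("embedding for the edgeless node:", z_new[:5], "...")
```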
Networks such as social networks, airline networks, and citation networks are ubiquitous. The adjacency matrix is often adopted to represent a network, but it is usually high-dimensional and sparse. However, to apply advanced machine learning algorithms to network data, low-dimensional and continuous representations are desired. To achieve this goal, many network embedding methods have been proposed recently. The majority of existing methods exploit local information, i.e., local connections between nodes, to learn the representations, while completely neglecting global information (or node status), which has been proven to boost numerous network mining tasks such as link prediction and social recommendation. Hence, global information also has the potential to advance network embedding. In this paper, we study the problem of preserving local and global information for network embedding. In particular, we introduce an approach to capture global information and propose a network embedding framework LOG, which can coherently model LOcal and Global information. Experimental results demonstrate the ability of the proposed framework to preserve global information. Further experiments are conducted to demonstrate the effectiveness of the representations learned by the proposed framework.
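A rough illustration of combining local and global information in one objective is sketched below. PageRank is used only as a stand-in for node status, and the combined loss with its weights is a toy assumption, not the LOG model itself.

```python
# Toy sketch: a local reconstruction term plus a global status-preservation term.
import networkx as nx
import numpy as np

G = nx.karate_club_graph()
n = G.number_of_nodes()
A = nx.to_numpy_array(G)

# Global information: node status scores (PageRank used as a stand-in).
pr = nx.pagerank(G)
status = np.array([pr[v] for v in range(n)])

rng = np.random.default_rng(0)
Z = rng.normal(scale=0.1, size=(n, 16))  # embedding matrix to be learned


def combined_loss(Z, A, status, alpha=1.0):
    # Local term: reconstruct first-order connections from embedding similarity.
    local = np.sum((A - Z @ Z.T) ** 2)
    # Global term: a simple 1-D projection of the embedding should track the
    # standardized status scores (squared error on standardized values).
    proj = Z @ np.ones(Z.shape[1]) / Z.shape[1]
    z_std = (proj - proj.mean()) / (proj.std() + 1e-12)
    s_std = (status - status.mean()) / (status.std() + 1e-12)
    glob = np.sum((z_std - s_std) ** 2)
    return local + alpha * glob


print("initial combined loss:", combined_loss(Z, A, status))
```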
Community detection, which aims to group nodes based on their connections, plays an important role in network analysis, since communities, treated as meta-nodes, allow us to create a large-scale map of a network and thereby simplify its analysis. However, for privacy reasons, we may want to prevent communities from being discovered in certain cases, which leads to the topic of community deception. In this paper, we formalize this community detection attack problem at three scales: global attack (macroscale), target community attack (mesoscale), and target node attack (microscale). We treat this as an optimization problem and propose a novel Evolutionary Perturbation Attack (EPA) method, in which we generate adversarial networks to realize the community detection attack. Numerical experiments validate that EPA can successfully attack network community algorithms at all three scales, i.e., hide target nodes or communities and further disturb the community structure of the whole network, by changing only a small fraction of links. By comparison, EPA performs better than a number of baseline attack methods on six synthetic networks and three real-world networks. More interestingly, although EPA is based on the Louvain algorithm, it is also effective in attacking other community detection algorithms, validating its good transferability.
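A sketch of the kind of fitness such an evolutionary attack can optimize is given below: flip a small set of edges and measure how far the detected community structure drifts from the original (1 - NMI), with Louvain as the detector since the abstract mentions it. The GA loop itself (selection, crossover, mutation over flip sets) is omitted for brevity; only the assumed fitness evaluation is shown.

```python
# Hedged sketch of a global community-disruption fitness for an evolutionary attack.
import networkx as nx
from networkx.algorithms.community import louvain_communities
from sklearn.metrics import normalized_mutual_info_score


def labels_from_partition(G, communities):
    """Turn a list of community sets into one label per node (sorted order)."""
    label = {}
    for cid, comm in enumerate(communities):
        for v in comm:
            label[v] = cid
    return [label[v] for v in sorted(G.nodes())]


def disruption_fitness(G, flips):
    """Higher is better for the attacker: 1 - NMI(original, perturbed)."""
    H = G.copy()
    for u, v in flips:
        if H.has_edge(u, v):
            H.remove_edge(u, v)
        else:
            H.add_edge(u, v)
    orig = labels_from_partition(G, louvain_communities(G, seed=0))
    pert = labels_from_partition(H, louvain_communities(H, seed=0))
    return 1.0 - normalized_mutual_info_score(orig, pert)


G = nx.karate_club_graph()
print("fitness of flipping (0, 33):", disruption_fitness(G, [(0, 33)]))
```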
A main challenge in mining network-based data is finding effective ways to represent or encode graph structures so that they can be efficiently exploited by machine learning algorithms. Several methods have focused on network representation at the node/edge or substructure level. However, many real-life challenges, such as time-varying, multilayer, chemical-compound, and brain networks, involve the analysis of a family of graphs instead of a single one, opening up additional challenges in graph comparison and representation. Traditional approaches for learning representations rely on hand-crafted, specialized heuristics to extract meaningful information about the graphs, e.g., statistical properties, structural features, etc., as well as engineered graph distances to quantify the dissimilarity between networks. In this work, we provide an unsupervised approach to learn embedding representations for a collection of graphs so that they can be used in numerous graph mining tasks. By using an unsupervised neural network approach on input graphs, we aim to capture the underlying distribution of the data in order to discriminate between different classes of networks. Our method is assessed empirically on synthetic and real-life datasets and evaluated on three different tasks: graph clustering, visualization, and classification. Results reveal that our method outperforms well-known graph distances and graph kernels in clustering and classification tasks, while being highly efficient in runtime.
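The sketch below illustrates one way to embed a collection of graphs without supervision: summarize each graph by simple descriptors and compress them with a tiny autoencoder. Note that the original work aims to avoid exactly this kind of hand-crafted feature engineering; the descriptor set, the toy graph collection, and the linear autoencoder are assumptions made only to keep the example self-contained.

```python
# Hedged sketch: unsupervised embeddings for a collection of graphs.
import networkx as nx
import numpy as np


def graph_descriptor(G, bins=16):
    """Simple per-graph descriptor: degree histogram plus a few global statistics."""
    degs = np.array([d for _, d in G.degree()], dtype=float)
    hist, _ = np.histogram(degs, bins=bins, range=(0, bins), density=True)
    extras = [G.number_of_nodes(), G.number_of_edges(),
              nx.density(G), nx.average_clustering(G)]
    return np.concatenate([hist, extras])


# A toy collection: Erdos-Renyi graphs vs. Barabasi-Albert graphs.
rng = np.random.default_rng(0)
graphs = [nx.gnp_random_graph(60, 0.15, seed=i) for i in range(20)] + \
         [nx.barabasi_albert_graph(60, 2, seed=i) for i in range(20)]
X = np.vstack([graph_descriptor(G) for G in graphs])
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # standardize features

# Tiny linear autoencoder trained by gradient descent on reconstruction error.
d_in, d_hid, lr = X.shape[1], 2, 1e-3
W1 = rng.normal(scale=0.1, size=(d_in, d_hid))
W2 = rng.normal(scale=0.1, size=(d_hid, d_in))
for _ in range(2000):
    H = X @ W1
    X_hat = H @ W2
    grad_out = 2.0 * (X_hat - X) / len(X)
    grad_W2 = H.T @ grad_out
    grad_W1 = X.T @ (grad_out @ W2.T)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

embeddings = X @ W1  # one 2-D vector per graph, usable for clustering/visualization
print("graph embeddings shape:", embeddings.shape)
```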