
Graph Neural Network with Automorphic Equivalence Filters

Added by: Fengli Xu
Publication date: 2020
Language: English





Graph neural networks (GNNs) have recently been established as an effective representation learning framework for graph data. However, the popular message passing models rely on local permutation-invariant aggregate functions, which raises concerns about their representational power. Here, we introduce the concept of automorphic equivalence to theoretically analyze GNNs' expressiveness in differentiating nodes' structural roles. We show that existing message passing GNNs have limitations in learning expressive representations. Moreover, we design a novel GNN class that leverages learnable automorphic equivalence filters to explicitly differentiate the structural roles of each node's neighbors, and uses a squeeze-and-excitation module to fuse the various structural information. We theoretically prove that the proposed model is expressive in the sense of generating distinct representations for nodes with different structural features. Furthermore, we empirically validate our model on eight real-world graph datasets, including social networks, e-commerce co-purchase networks and citation networks, and show that it consistently outperforms strong baselines.
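To make the described architecture concrete, below is a minimal PyTorch sketch of one such layer, assuming each node's neighbors have already been partitioned into structural-role classes (supplied here as per-role adjacency matrices). All names and shapes are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class RoleFilterLayer(nn.Module):
    """Sketch: one message-passing layer with a separate learnable filter
    per (assumed) structural-role class of neighbors, fused by a
    squeeze-and-excitation (SE) gate over the role channels."""

    def __init__(self, in_dim: int, out_dim: int, num_roles: int):
        super().__init__()
        # one linear filter per hypothetical automorphic-equivalence role class
        self.role_filters = nn.ModuleList(
            [nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_roles)]
        )
        # SE gate: summarize each role channel, then reweight it
        self.se = nn.Sequential(
            nn.Linear(num_roles, num_roles // 2 + 1),
            nn.ReLU(),
            nn.Linear(num_roles // 2 + 1, num_roles),
            nn.Sigmoid(),
        )

    def forward(self, x, role_adj):
        # x:        (N, in_dim) node features
        # role_adj: (num_roles, N, N), role_adj[r][i, j] = 1 iff node j is an
        #           r-role neighbor of node i (assumed precomputed)
        msgs = torch.stack(
            [role_adj[r] @ f(x) for r, f in enumerate(self.role_filters)], dim=1
        )                                        # (N, num_roles, out_dim)
        squeeze = msgs.mean(dim=-1)              # (N, num_roles) per-role summary
        gate = self.se(squeeze).unsqueeze(-1)    # (N, num_roles, 1) channel weights
        return torch.relu((gate * msgs).sum(dim=1))  # fused (N, out_dim)
```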




Read More

Zezhi Shao, Yongjun Xu, Wei Wei (2021)
Graph neural networks for heterogeneous graph embedding project nodes into a low-dimensional space by exploring the heterogeneity and semantics of the heterogeneous graph. However, on the one hand, most existing heterogeneous graph embedding methods either insufficiently model the local structure under a specific semantic, or neglect the heterogeneity when aggregating information from it. On the other hand, representations from multiple semantics are not comprehensively integrated to obtain versatile node embeddings. To address these problems, we propose a Heterogeneous Graph Neural Network with Multi-View Representation Learning (named MV-HetGNN) for heterogeneous graph embedding, introducing the idea of multi-view representation learning. The proposed model consists of node feature transformation, view-specific ego graph encoding, and auto multi-view fusion to thoroughly learn complex structural and semantic information for generating comprehensive node representations. Extensive experiments on three real-world heterogeneous graph datasets show that the proposed MV-HetGNN model consistently outperforms all state-of-the-art GNN baselines in various downstream tasks, e.g., node classification, node clustering, and link prediction.
Graph neural networks (GNNs) have been successfully employed in a myriad of applications involving graph-structured data. Theoretical findings establish that GNNs use nonlinear activation functions to create low-eigenvalue frequency content that can be processed in a stable manner by subsequent graph convolutional filters. However, the exact shape of the frequency content created by nonlinear functions is not known, and thus it can be neither learned nor controlled. In this work, node-variant graph filters (NVGFs) are shown to be capable of creating frequency content and are thus used in lieu of nonlinear activation functions. This results in a novel GNN architecture that, although linear, is capable of creating frequency content as well. Furthermore, this new frequency content can be either designed or learned from data. In this way, the role of frequency creation is separated from the nonlinear nature of traditional GNNs. Extensive simulations are carried out to differentiate the contributions of frequency creation from those of the nonlinearity.
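As a rough illustration of the node-variant filter idea, the sketch below computes y = Σ_k diag(h_k) S^k x, where each node carries its own filter tap per hop instead of one shared scalar. The shift operator S, signal x, and tap matrix H are assumed inputs; this is not the paper's implementation.

```python
import numpy as np

def node_variant_filter(S, x, H):
    """Apply a node-variant graph filter (sketch of the NVGF idea).
    S: (N, N) graph shift operator (e.g. adjacency matrix)
    x: (N,) graph signal
    H: (N, K+1) per-node filter taps, H[i, k] = tap of node i at hop k
    Returns y = sum_k diag(H[:, k]) @ S^k @ x."""
    K = H.shape[1] - 1
    y = np.zeros_like(x, dtype=float)
    Skx = x.astype(float)              # S^0 @ x
    for k in range(K + 1):
        y += H[:, k] * Skx             # diag(H[:, k]) @ (S^k @ x)
        Skx = S @ Skx                  # advance to S^(k+1) @ x
    return y
```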
Network data can be conveniently modeled as a graph signal, where data values are assigned to nodes of a graph that describes the underlying network topology. Successful learning from network data is built upon methods that effectively exploit this graph structure. In this work, we leverage graph signal processing to characterize the representation space of graph neural networks (GNNs). We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology. These two properties offer insight about the workings of GNNs and help explain their scalability and transferability properties which, coupled with their local and distributed nature, make GNNs powerful tools for learning in physical networks. We also introduce GNN extensions using edge-varying and autoregressive moving average graph filters and discuss their properties. Finally, we study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
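The permutation-equivariance property discussed above is easy to verify numerically for a polynomial graph filter: relabeling the graph and then filtering gives the same result as filtering and then relabeling the output. A small self-contained check, using an arbitrary random graph and illustrative filter taps, follows.

```python
import numpy as np

rng = np.random.default_rng(0)
N, taps = 6, [0.5, 0.3, 0.2]           # illustrative size and filter taps

A = rng.random((N, N))
S = np.triu(A, 1); S = S + S.T          # symmetric shift operator (adjacency-like)
x = rng.standard_normal(N)              # random graph signal

def graph_filter(S, x, taps):
    """Polynomial graph filter y = sum_k h_k S^k x."""
    y, Skx = np.zeros_like(x), x.copy()
    for h in taps:
        y += h * Skx
        Skx = S @ Skx
    return y

P = np.eye(N)[rng.permutation(N)]            # random permutation matrix
lhs = graph_filter(P @ S @ P.T, P @ x, taps) # filter on the relabeled graph
rhs = P @ graph_filter(S, x, taps)           # relabel the filtered signal
print(np.allclose(lhs, rhs))                 # True: permutation equivariance
```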
Wenqi Fan, Wei Jin, Xiaorui Liu (2021)
Graph Neural Networks (GNNs) have boosted the performance of many graph-related tasks. Despite this great success, recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead a GNN's predictions by modifying graphs. On the other hand, explanation methods for GNNs (e.g., GNNExplainer) provide a better understanding of a trained GNN model by identifying a small subgraph and the features that are most influential for its prediction. In this paper, we first perform empirical studies to validate that GNNExplainer can act as an inspection tool and has the potential to detect adversarial perturbations in graphs. This finding motivates us to initiate a new problem investigation: can a graph neural network and its explanations be jointly attacked by maliciously modifying graphs? This question is challenging to answer, since the goals of adversarial attacks and of bypassing the GNNExplainer essentially contradict each other. In this work, we give an affirmative answer by proposing a novel attack framework (GEAttack), which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities. Extensive experiments with two explainers (GNNExplainer and PGExplainer) on various real-world datasets demonstrate the effectiveness of the proposed method.
In graph neural networks (GNNs), message passing iteratively aggregates nodes' information from their direct neighbors while neglecting the sequential nature of multi-hop node connections. Such sequential node connections, e.g., metapaths, capture critical insights for downstream tasks. Concretely, in recommender systems (RSs), disregarding these insights leads to inadequate distillation of collaborative signals. In this paper, we employ collaborative subgraphs (CSGs) and metapaths to form metapath-aware subgraphs, which explicitly capture sequential semantics in graph structures. We propose the MetaPath- and Entity-Aware Graph Neural Network (PEAGNN), which trains multilayer GNNs to perform metapath-aware information aggregation on such subgraphs. The aggregated information from different metapaths is then fused using an attention mechanism (see the sketch below). Finally, PEAGNN yields representations for nodes and subgraphs, which can be used to train an MLP that predicts scores for target user-item pairs. To leverage the local structure of CSGs, we present entity-awareness, which acts as a contrastive regularizer on node embeddings. Moreover, PEAGNN can be combined with prominent GNN layers such as GAT, GCN and GraphSAGE. Our empirical evaluation shows that the proposed technique outperforms competitive baselines on several datasets for recommendation tasks. Further analysis demonstrates that PEAGNN also learns meaningful metapath combinations from a given set of metapaths.
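A minimal sketch of the attention-based fusion step described above, assuming per-metapath node embeddings have already been produced by some GNN encoder (e.g., GCN, GAT, or GraphSAGE). Shapes and names are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MetapathFusion(nn.Module):
    """Sketch: fuse per-metapath node embeddings with a learned attention
    weight per metapath."""

    def __init__(self, dim: int):
        super().__init__()
        # small scorer that maps each embedding to a scalar attention logit
        self.attn = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, z):
        # z: (num_metapaths, N, dim), one embedding slice per metapath
        scores = self.attn(z).mean(dim=1)             # (num_metapaths, 1): one logit per metapath
        weights = torch.softmax(scores, dim=0)        # attention over metapaths
        return (weights.unsqueeze(1) * z).sum(dim=0)  # fused (N, dim) embeddings
```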
