
Two Sides of the Same Coin: Heterophily and Oversmoothing in Graph Convolutional Neural Networks

Published by: Yujun Yan
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Most graph convolutional neural networks (GCNs) perform poorly in graphs where neighbors typically have different features/classes (heterophily) and when stacking multiple layers (oversmoothing). These two seemingly unrelated problems have been studied independently, but there is recent empirical evidence that solving one problem may benefit the other. In this work, going beyond empirical observations, we aim to: (1) propose a new perspective to analyze the heterophily and oversmoothing problems under a unified theoretical framework, (2) identify the common causes of the two problems based on the proposed framework, and (3) propose simple yet effective strategies that address the common causes. Focusing on the node classification task, we use linear separability of node representations as an indicator of GCN performance, and we propose to study linear separability by analyzing the statistical change in the node representations during graph convolution. We find that the relative degree of a node (compared to its neighbors) and the heterophily level of a node's neighborhood are the root causes that influence the separability of node representations. Our analysis suggests that: (1) nodes with high heterophily always produce less separable representations after graph convolution; (2) even with low heterophily, degree disparity between nodes can influence the network dynamics and result in a pseudo-heterophily situation, which helps to explain oversmoothing. Based on our insights, we propose simple modifications to the GCN architecture -- i.e., degree corrections and signed messages -- which alleviate the root causes of these issues, and we also demonstrate this empirically on 9 real networks. Compared to other approaches, which tend to work well in one regime but fail in others, our modified GCN model consistently performs well across all settings.
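The two architectural modifications mentioned above can be illustrated with a short message-passing sketch. The layer below is a minimal, hypothetical PyTorch implementation, not the authors' exact formulation: the learned edge signs derived from feature similarity, the degree-based rescaling of messages, and the residual connection are illustrative design choices.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DegreeCorrectedSignedConv(nn.Module):
    # Illustrative graph convolution with (a) signed messages, whose sign is
    # decided by neighbor feature similarity, and (b) a degree correction that
    # damps messages coming from neighbors with very different degrees.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.sign_mlp = nn.Linear(1, 1)             # similarity -> signed edge weight
        self.degree_scale = nn.Parameter(torch.zeros(1))

    def forward(self, x, adj):
        # x: [N, in_dim] node features; adj: [N, N] dense adjacency without self-loops
        h = self.lin(x)
        deg = adj.sum(dim=1).clamp(min=1)
        # degree correction: entry (i, j) ~ sqrt(d_j / d_i), used to rescale messages
        rel_deg = (deg.unsqueeze(0) / deg.unsqueeze(1)).sqrt()
        corr = torch.sigmoid(self.degree_scale) / rel_deg
        # signed messages: similar neighbors add, dissimilar neighbors subtract
        sim = F.cosine_similarity(h.unsqueeze(1), h.unsqueeze(0), dim=-1)
        sign_w = torch.tanh(self.sign_mlp(sim.unsqueeze(-1))).squeeze(-1)
        weights = adj * sign_w * corr                # zero where there is no edge
        return h + (weights @ h) / deg.unsqueeze(1)  # residual + weighted aggregation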


Read also

Community structure is an important feature of many networks. One of the most popular ways to capture community structure is through a quantitative measure, modularity, which serves both as a standard benchmark for comparing different community detection algorithms and as an optimization objective for detecting communities. Previous works on modularity mainly focus on approximation methods for modularity maximization to detect communities, or on minor modifications to the definition. In this paper, we study modularity from an information-theoretical perspective and show that modularity and mutual information in networks are essentially the same. The main contribution is that we develop a family of generalized modularity measures, $f$-Modularity, which includes the original modularity as a special case. At a high level, we show that the significance of community structure is equivalent to the amount of information contained in the network. On the one hand, $f$-Modularity has an information-theoretical interpretation and enjoys the desired properties of a mutual information measure. On the other hand, quantifying community structure also provides an approach to estimating the mutual information between discrete random samples with a large value space, given only limited samples. We demonstrate the algorithm for optimizing $f$-Modularity in a relatively general case and validate it through experimental results on simulated networks. We also apply $f$-Modularity to real-world market networks. Our results bridge two important fields, complex networks and information theory, and also shed light on the design of future measures of community structure.
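For reference, the standard (Newman-Girvan) modularity that $f$-Modularity generalizes can be written as

$$ Q = \frac{1}{2m} \sum_{ij} \left[ A_{ij} - \frac{k_i k_j}{2m} \right] \delta(c_i, c_j), $$

where $A$ is the adjacency matrix, $k_i$ the degree of node $i$, $m$ the number of edges, and $\delta(c_i, c_j) = 1$ when nodes $i$ and $j$ are assigned to the same community. This is the textbook definition, included here only for context; the precise way $f$-Modularity deforms the comparison between observed and expected edge weights is specified in the paper itself.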
Transfer learning has become a common practice for training deep learning models with limited labeled data in a target domain. On the other hand, deep models are vulnerable to adversarial attacks. Though transfer learning has been widely applied, its effect on model robustness is unclear. To investigate this question, we conduct extensive empirical evaluations and show that fine-tuning effectively enhances model robustness under white-box FGSM attacks. We also propose a black-box attack method for transfer learning models, which attacks the target model with the adversarial examples produced by its source model. To systematically measure the effect of both white-box and black-box attacks, we propose a new metric to evaluate how transferable the adversarial examples produced by a source model are to a target model. Empirical results show that the adversarial examples are more transferable when fine-tuning is used than when the two networks are trained independently.
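The white-box attack referenced above, FGSM, perturbs the input by a single step along the sign of the loss gradient. A minimal sketch is below; the model, loss, and epsilon value are placeholders, and in the black-box transfer attack described in the abstract the gradient would be taken on the source model while the perturbed input is fed to the target model.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # One-step FGSM: shift each input coordinate by +/- epsilon along the
    # sign of the gradient of the classification loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()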
Graph convolutional neural networks (GCNs) embed nodes in a graph into Euclidean space, which has been shown to incur a large distortion when embedding real-world graphs with scale-free or hierarchical structure. Hyperbolic geometry offers an exciting alternative, as it enables embeddings with much smaller distortion. However, extending GCNs to hyperbolic geometry presents several unique challenges because it is not clear how to define neural network operations, such as feature transformation and aggregation, in hyperbolic space. Furthermore, since input features are often Euclidean, it is unclear how to transform the features into hyperbolic embeddings with the right amount of curvature. Here we propose the Hyperbolic Graph Convolutional Neural Network (HGCN), the first inductive hyperbolic GCN that leverages both the expressiveness of GCNs and hyperbolic geometry to learn inductive node representations for hierarchical and scale-free graphs. We derive GCN operations in the hyperboloid model of hyperbolic space and map Euclidean input features to embeddings in hyperbolic spaces with a different trainable curvature at each layer. Experiments demonstrate that HGCN learns embeddings that preserve hierarchical structure and leads to improved performance compared to Euclidean analogs, even with very low-dimensional embeddings: compared to state-of-the-art GCNs, HGCN achieves an error reduction of up to 63.1% in ROC AUC for link prediction and of up to 47.5% in F1 score for node classification, also improving the state of the art on the Pubmed dataset.
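One building block mentioned above, mapping Euclidean input features onto hyperbolic space, can be sketched with the exponential map at the origin of the hyperboloid model. The snippet below fixes the curvature at -1 for simplicity (HGCN itself learns a curvature per layer), so it illustrates the geometry rather than the full HGCN feature transform.

import numpy as np

def expmap_origin(v, eps=1e-8):
    # Map a Euclidean tangent vector v in R^d onto the hyperboloid
    # {x : -x_0^2 + x_1^2 + ... + x_d^2 = -1, x_0 > 0} via the exponential
    # map at the origin o = (1, 0, ..., 0), assuming curvature -1.
    norm = np.linalg.norm(v)
    if norm < eps:
        return np.concatenate(([1.0], np.zeros_like(v)))
    return np.concatenate(([np.cosh(norm)], np.sinh(norm) * v / norm))

x_hyp = expmap_origin(np.array([0.5, -1.0, 0.2]))
print(-x_hyp[0]**2 + np.sum(x_hyp[1:]**2))  # Minkowski norm, should be -1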
Graph convolution networks have recently garnered a lot of attention for representation learning on non-Euclidean feature spaces. Recent research has focused on stacking multiple layers, as in convolutional neural networks, to increase the expressive power of graph convolution networks. However, simply stacking multiple graph convolution layers leads to issues like vanishing gradients, over-fitting and over-smoothing. Such problems are much less pronounced in shallower networks, even though shallow networks have lower expressive power. In this work, we propose a novel Multipath Graph Convolutional Neural Network that aggregates the outputs of multiple different shallow networks. We train and test our model on various benchmark datasets for the task of node property prediction. Results show that the proposed method not only attains increased test accuracy but also requires fewer training epochs to converge. The full implementation is available at https://github.com/rangan2510/MultiPathGCN
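The aggregation-of-shallow-branches idea can be sketched as follows; the branch depths, the averaging aggregation, and the dense normalized adjacency are illustrative assumptions rather than the authors' exact architecture (their implementation is in the linked repository).

import torch
import torch.nn as nn

class ShallowGCN(nn.Module):
    # One shallow branch: a small, fixed number of graph convolutions.
    def __init__(self, in_dim, hid_dim, out_dim, n_layers=2):
        super().__init__()
        dims = [in_dim] + [hid_dim] * (n_layers - 1) + [out_dim]
        self.lins = nn.ModuleList(nn.Linear(dims[i], dims[i + 1]) for i in range(n_layers))

    def forward(self, x, adj_norm):
        for i, lin in enumerate(self.lins):
            x = adj_norm @ lin(x)
            if i < len(self.lins) - 1:
                x = torch.relu(x)
        return x

class MultipathGCN(nn.Module):
    # Aggregate (here: average) the outputs of several shallow branches
    # instead of stacking one deep network.
    def __init__(self, in_dim, hid_dim, out_dim, n_paths=3):
        super().__init__()
        self.paths = nn.ModuleList(
            ShallowGCN(in_dim, hid_dim, out_dim, n_layers=k + 1) for k in range(n_paths))

    def forward(self, x, adj_norm):
        return torch.stack([p(x, adj_norm) for p in self.paths]).mean(dim=0)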
Inspired by convolutional neural networks on 1D and 2D data, graph convolutional neural networks (GCNNs) have been developed for various learning tasks on graph data and have shown superior performance on real-world datasets. Despite their success, there is a dearth of theoretical exploration of GCNN models, such as their generalization properties. In this paper, we take a first step towards developing a deeper theoretical understanding of GCNN models by analyzing the stability of single-layer GCNN models and deriving their generalization guarantees in a semi-supervised graph learning setting. In particular, we show that the algorithmic stability of a GCNN model depends upon the largest absolute eigenvalue of its graph convolution filter. Moreover, to ensure the uniform stability needed to provide strong generalization guarantees, the largest absolute eigenvalue must be independent of the graph size. Our results shed new light on the design of new and improved graph convolution filters with guaranteed algorithmic stability. We evaluate the generalization gap and stability on various real-world graph datasets and show that the empirical results indeed support our theoretical findings. To the best of our knowledge, we are the first to study stability bounds on graph learning in a semi-supervised setting and to derive generalization bounds for GCNN models.
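The stability quantity identified above is straightforward to inspect for a concrete filter. The snippet below computes the largest absolute eigenvalue of the symmetric normalized adjacency with self-loops, used here only as an example of a graph convolution filter.

import numpy as np

def max_abs_eigenvalue(adj):
    # Largest absolute eigenvalue of D^{-1/2} (A + I) D^{-1/2},
    # a common choice of graph convolution filter.
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    g = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.max(np.abs(np.linalg.eigvalsh(g)))

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # 4-node path graph
print(max_abs_eigenvalue(A))                # 1.0 for this normalization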
