Distance-wise Graph Contrastive Learning


Abstract

Contrastive learning (CL) has proven highly effective in graph-based semi-supervised learning (SSL), since it can efficiently supplement the limited task information provided by the annotated nodes in the graph. However, existing graph CL (GCL) studies ignore the uneven distribution of task information across the graph caused by the graph topology and the selection of annotated nodes. They apply CL to the whole graph uniformly, which results in an incongruous combination of CL and graph learning. To address this issue, we propose to apply CL in graph learning adaptively by taking the task information received by each node into consideration. Firstly, we introduce Group PageRank to measure the node information gain from the graph and find that CL mainly works for nodes that are topologically far away from the labeled nodes. We then propose our Distance-wise Graph Contrastive Learning (DwGCL) method from two views: (1) From the global view of the task information distribution across the graph, we enhance the CL effect on nodes that are topologically far away from labeled nodes; (2) From the personal view of each node's received information, we measure the relative distance between nodes and adapt the sampling strategy of GCL accordingly. Extensive experiments on five benchmark graph datasets show that DwGCL brings a clear improvement over previous GCL methods. Our analysis on eight graph neural networks with various architectures and three different annotation settings further demonstrates the generalizability of DwGCL.
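To make the role of Group PageRank concrete, the sketch below shows one plausible formulation: a personalized-PageRank score whose teleport distribution is spread over the labeled nodes of each class, so that the resulting score reflects how much task information a node receives from the annotated set. The function name `group_pagerank`, the damping parameter `alpha`, and the closed-form solve are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def group_pagerank(adj, labeled_idx_per_class, alpha=0.15):
    """Sketch of a Group PageRank-style score (hypothetical formulation):
    personalized PageRank with each class's labeled nodes as the teleport set.

    adj: dense (N, N) adjacency matrix.
    labeled_idx_per_class: list of index arrays, one per class.
    Returns an (N, C) score matrix; larger row sums mean a node is
    topologically closer to (receives more information from) labeled nodes.
    """
    n = adj.shape[0]
    # Symmetrically normalize the adjacency matrix with self-loops.
    a_hat = adj + np.eye(n)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    # Teleport matrix E: column c is uniform over the labeled nodes of class c.
    teleport = np.zeros((n, len(labeled_idx_per_class)))
    for c, idx in enumerate(labeled_idx_per_class):
        teleport[idx, c] = 1.0 / len(idx)
    # Closed-form personalized PageRank: alpha * (I - (1 - alpha) * A_norm)^-1 * E
    return alpha * np.linalg.solve(np.eye(n) - (1 - alpha) * a_norm, teleport)


# Toy usage: a 4-node path graph with node 0 labeled as class 0.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
scores = group_pagerank(adj, [np.array([0])])
print(scores.sum(axis=1))  # decreases with distance from the labeled node
```

Under this reading, nodes with low aggregate scores (far from labeled nodes) would receive a stronger contrastive signal, e.g. by upweighting their CL loss terms or adjusting their positive/negative sampling, matching the two views described in the abstract.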