
RaWaNet: Enriching Graph Neural Network Input via Random Walks on Graphs

Added by Martin Stoll
Publication date: 2021
Language: English





In recent years, graph neural networks (GNNs) have gained increasing popularity and have shown very promising results for data that are represented by graphs. The majority of GNN architectures are designed around new convolutional and/or pooling layers that better extract the hidden and deeper representations of the graphs to be used for different prediction tasks. The inputs to these layers are mainly the three default descriptors of a graph: node features $(X)$, adjacency matrix $(A)$, and edge features $(W)$ (if available). To provide a more enriched input to the network, we propose a random walk data processing of the graphs based on three selected lengths: (regular) walks of length 1 and 2, and a fractional walk of length $\gamma \in (0,1)$, in order to capture the different local and global dynamics on the graphs. We also calculate the stationary distribution of each random walk, which is then used as a scaling factor for the initial node features ($X$). This way, for each graph, the network receives multiple adjacency matrices along with their individual weighting for the node features. We test our method on various molecular datasets by passing the processed node features to the network in order to perform several classification and regression tasks. Interestingly, our method, which does not use the edge features that are heavily exploited in molecular graph learning, lets a shallow network outperform well-known deep GNNs.
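To make the preprocessing concrete, here is a minimal NumPy/SciPy sketch of the idea the abstract describes: build walk operators for lengths 1, 2, and a fractional $\gamma$, and rescale the node features by each walk's stationary distribution. The function name is illustrative, and the generic fractional matrix power (with negative entries clipped for the degree computation) stands in for the paper's exact fractional-walk operator, which may be defined differently.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def rawanet_inputs(A, X, gamma=0.5):
    """Walk operators of lengths 1, 2 and gamma, plus rescaled node features.

    A     : (n, n) symmetric adjacency matrix (dense NumPy array)
    X     : (n, d) node feature matrix
    gamma : fractional walk length in (0, 1)
    """
    # Walks of length 1 and 2; a generic fractional matrix power stands in
    # for the paper's fractional walk (the exact operator may differ).
    operators = [A, A @ A, fractional_matrix_power(A, gamma).real]
    enriched = []
    for W in operators:
        # For a random walk on a nonnegative weighted graph, the stationary
        # distribution is proportional to the (weighted) degrees.
        deg = np.maximum(W, 0.0).sum(axis=1)
        pi = deg / deg.sum()
        # Scale the initial node features by the stationary distribution.
        enriched.append((W, pi[:, None] * X))
    return enriched
```

Each (operator, scaled-features) pair can then be fed to the network in place of the single default $(A, X)$ input.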




Related research

We study the problem of semi-supervised learning on graphs, for which graph neural networks (GNNs) have been extensively explored. However, most existing GNNs inherently suffer from the limitations of over-smoothing, non-robustness, and weak generalization when labeled nodes are scarce. In this paper, we propose a simple yet effective framework, Graph Random Neural Networks (GRAND), to address these issues. In GRAND, we first design a random propagation strategy to perform graph data augmentation. Then we leverage consistency regularization to optimize the prediction consistency of unlabeled nodes across different data augmentations. Extensive experiments on graph benchmark datasets suggest that GRAND significantly outperforms state-of-the-art GNN baselines on semi-supervised node classification. Finally, we show that GRAND mitigates the issues of over-smoothing and non-robustness, exhibiting better generalization behavior than existing GNNs. The source code of GRAND is publicly available at https://github.com/Grand20/grand.
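The abstract names two ingredients, random propagation and consistency regularization; the sketch below shows one plausible reading of each in PyTorch. The DropNode rate, propagation order, and sharpening temperature are illustrative hyper-parameters, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def random_propagation(X, A_hat, drop_rate=0.5, order=4):
    """One augmentation: DropNode followed by mixed-order propagation."""
    # DropNode: zero out whole node rows at random, rescale to keep the mean.
    mask = (torch.rand(X.size(0), 1, device=X.device) > drop_rate).float()
    H = X * mask / (1.0 - drop_rate)
    # Average A_hat^k H over k = 0..order (A_hat: normalized adjacency).
    out, P = H, H
    for _ in range(order):
        P = A_hat @ P
        out = out + P
    return out / (order + 1)

def consistency_loss(logits_list, temperature=0.5):
    """Pull predictions from different augmentations toward a sharpened mean."""
    probs = [F.softmax(logits, dim=1) for logits in logits_list]
    avg = torch.stack(probs).mean(dim=0)
    sharp = avg.pow(1.0 / temperature)
    sharp = (sharp / sharp.sum(dim=1, keepdim=True)).detach()
    return sum(((p - sharp) ** 2).sum(dim=1).mean() for p in probs) / len(probs)
```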
Many learning tasks require us to deal with graph data, which contains rich relational information among elements, leading to an increasing number of graph neural network (GNN) models being deployed in industrial products to improve the quality of service. However, this also raises challenges for model authentication. It is necessary to protect the ownership of GNN models, which motivates us to present a watermarking method for GNN models in this paper. In the proposed method, an Erdős-Rényi (ER) random graph with random node feature vectors and labels is randomly generated as a trigger and used to train the GNN to be protected together with the normal samples. During model training, the secret watermark is embedded into the label predictions of the ER graph nodes. During model verification, by activating a marked GNN with the trigger ER graph, the watermark can be reconstructed from the output to verify ownership. Since the ER graph is randomly generated, feeding it to a non-marked GNN yields random label predictions for its nodes, resulting in a low false-alarm rate for the proposed method. Experimental results also show that the performance of a marked GNN on its original task is not impaired. Moreover, the method is robust against model compression and fine-tuning, which demonstrates its superiority and applicability.
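A minimal sketch of the trigger-based protocol described above, using NetworkX to generate the ER graph; the `model(A, X)` interface and the verification threshold are assumptions for illustration, not the paper's actual code.

```python
import numpy as np
import networkx as nx

def make_trigger(n=30, p=0.3, feat_dim=16, n_classes=4, seed=0):
    """Erdos-Renyi trigger: random graph, random features, watermark labels."""
    rng = np.random.default_rng(seed)
    G = nx.erdos_renyi_graph(n, p, seed=seed)
    A = nx.to_numpy_array(G, dtype=np.float32)
    X = rng.standard_normal((n, feat_dim)).astype(np.float32)
    y = rng.integers(0, n_classes, size=n)  # the secret watermark labels
    return A, X, y

def verify_ownership(model, trigger, threshold=0.9):
    """A marked GNN reproduces the watermark labels on the trigger graph;
    a non-marked GNN predicts them only at chance level."""
    A, X, y = trigger
    pred = model(A, X).argmax(axis=1)  # hypothetical model interface
    return float((pred == y).mean()) >= threshold
```

The trigger graph is trained jointly with the normal samples so that the watermark labels are memorized without harming the original task.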
We study random walks on the giant component of the Erdős-Rényi random graph $\mathcal{G}(n,p)$ where $p=\lambda/n$ for $\lambda>1$ fixed. The mixing time from a worst starting point was shown by Fountoulakis and Reed, and independently by Benjamini, Kozma and Wormald, to have order $\log^2 n$. We prove that starting from a uniform vertex (equivalently, from a fixed vertex conditioned to belong to the giant) both accelerates mixing to $O(\log n)$ and concentrates it (the cutoff phenomenon occurs): the typical mixing is at $(\nu \mathbf{d})^{-1}\log n \pm (\log n)^{1/2+o(1)}$, where $\nu$ and $\mathbf{d}$ are the speed of random walk and dimension of harmonic measure on a $\mathrm{Poisson}(\lambda)$-Galton-Watson tree. Analogous results are given for graphs with prescribed degree sequences, where cutoff is shown both for the simple and for the non-backtracking random walk.
Yiye Jiang, 2020
This paper is concerned with the problem of selecting an optimal sampling set of sensors over a network of time series for the purpose of signal recovery at non-observed sensors with a minimal reconstruction error. The problem is motivated by applications where time-dependent graph signals are collected over redundant networks. In this setting, one may wish to use only a subset of sensors to predict data streams over the whole collection of nodes in the underlying graph. A typical application is the possibility of reducing the power consumption in a network of sensors that may have limited battery supplies. We propose and compare various data-driven strategies to turn off a fixed number of sensors, or equivalently to select a sampling set of nodes. We also relate our approach to the existing literature on sensor selection from multivariate data with a (possibly) underlying graph structure. Our methodology combines tools from multivariate time series analysis, graph signal processing, high-dimensional statistical learning, and deep learning. To illustrate the performance of our approach, we report numerical experiments on the analysis of real data from bike sharing networks in different cities.
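As a concrete toy instance of a data-driven turn-off strategy, the sketch below greedily removes, one at a time, the sensor whose stream is best reconstructed by least squares from the sensors that stay on. This simple baseline is only illustrative; the paper compares richer strategies drawing on graph signal processing and deep learning.

```python
import numpy as np

def greedy_turn_off(Y, k):
    """Greedily turn off k sensors from a (T, n) matrix Y of time series,
    removing at each step the sensor whose stream is best reconstructed
    (by least squares) from the sensors that remain switched on."""
    on = list(range(Y.shape[1]))
    off = []
    for _ in range(k):
        errors = []
        for j in on:
            keep = [i for i in on if i != j]
            coef, *_ = np.linalg.lstsq(Y[:, keep], Y[:, j], rcond=None)
            errors.append(np.mean((Y[:, keep] @ coef - Y[:, j]) ** 2))
        off.append(on.pop(int(np.argmin(errors))))
    return off  # indices of the sensors to switch off
```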
In recent years, protocols that are based on the properties of random walks on graphs have found many applications in communication and information networks, such as wireless networks, peer-to-peer networks, and the Web. For wireless networks (and other networks), graphs are actually not the correct model of the communication; instead, hyper-graphs better capture the communication over a wireless shared channel. Motivated by this example, we study in this paper random walks on hyper-graphs. First, we formalize the random walk process on hyper-graphs and generalize key notions from random walks on graphs. We then give the novel definition of radio cover time, namely, the expected time of a random walk to be heard (as opposed to visited) by all nodes. We then provide some basic bounds on the radio cover time; in particular, we show that while on graphs the radio cover time is $O(mn)$, on hyper-graphs it is $O(mnr)$, where $n$, $m$ and $r$ are the number of nodes, the number of edges, and the rank of the hyper-graph, respectively. In addition, we define radio hitting times and give a polynomial algorithm to compute them. We conclude the paper with results on specific hyper-graphs that model wireless networks in one and two dimensions.
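The sketch below gives a Monte-Carlo estimate of the radio cover time under one natural convention suggested by the abstract: at each step the walker picks an incident hyperedge uniformly, its transmission is heard by every node of that hyperedge, and it then moves to a uniform node of that edge. The uniform choices and the connectivity assumption are illustrative; the paper formalizes the process in general.

```python
import random

def radio_cover_time(hyperedges, n, trials=200, seed=0):
    """Monte-Carlo estimate of the radio cover time on a hyper-graph.

    hyperedges : list of tuples of node indices in range(n)
    Assumes every node lies in some hyperedge and the hyper-graph is
    connected, so the walk is eventually heard by everyone.
    """
    rng = random.Random(seed)
    incident = [[] for _ in range(n)]
    for e in hyperedges:
        for v in e:
            incident[v].append(e)
    total = 0
    for _ in range(trials):
        v = rng.randrange(n)
        heard, steps = set(), 0
        while len(heard) < n:
            e = rng.choice(incident[v])  # pick an incident hyperedge
            heard.update(e)              # its members hear the transmission
            v = rng.choice(e)            # then move to a uniform node in it
            steps += 1
        total += steps
    return total / trials
```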
