
Systematic comparison of graph embedding methods in practical tasks

Posted by Filippo Radicchi
Publication date: 2021
Paper language: English





Network embedding techniques aim at representing structural properties of graphs in geometric space. Those representations are considered useful in downstream tasks such as link prediction and clustering. However, the number of graph embedding methods available on the market is large, and practitioners face the non-trivial choice of selecting the proper approach for a given application. The present work attempts to close this knowledge gap through a systematic comparison of eleven different methods for graph embedding. We consider methods for embedding networks in the hyperbolic and Euclidean metric spaces, as well as non-metric community-based embedding methods. We apply these methods to embed more than one hundred real-world and synthetic networks. Three common downstream tasks -- mapping accuracy, greedy routing, and link prediction -- are considered to evaluate the quality of the various embedding methods. Our results show that some Euclidean embedding methods excel in greedy routing. As for link prediction, community-based and hyperbolic embedding methods yield overall performance superior to that of Euclidean-space-based approaches. We compare the running time for different methods and further analyze the impact of different network characteristics, such as degree distribution, modularity, and clustering coefficients, on the quality of the different embedding methods. We release our evaluation framework to provide a standardized benchmark for arbitrary embedding methods.
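
The released evaluation framework is not reproduced here; as a rough, hedged illustration of how one of the downstream tasks (link prediction) can be scored from an embedding, the sketch below hides a fraction of edges, embeds the residual graph with a simple Laplacian-eigenmap stand-in (not one of the eleven methods compared in the paper), and ranks held-out edges against sampled non-edges by Euclidean distance in the embedding space. All method and parameter choices here are illustrative assumptions.

```python
# Hedged sketch: scoring the link-prediction task from a Euclidean embedding.
# The embedding (a Laplacian eigenmap) and the distance-based score are
# illustrative stand-ins, not the specific methods compared in the paper.
import numpy as np
import networkx as nx
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def laplacian_eigenmap(G, dim=8):
    """Embed nodes using low non-trivial eigenvectors of the normalized Laplacian."""
    nodes = list(G.nodes())
    L = nx.normalized_laplacian_matrix(G, nodelist=nodes).toarray()
    vals, vecs = np.linalg.eigh(L)
    coords = vecs[:, 1:dim + 1]               # skip the trivial eigenvector
    return {n: coords[i] for i, n in enumerate(nodes)}

def link_prediction_auc(G, test_frac=0.1, dim=8):
    edges = list(G.edges())
    n_test = int(test_frac * len(edges))
    idx = rng.permutation(len(edges))
    test_pos = [edges[i] for i in idx[:n_test]]

    # Hide the positive test edges before embedding.
    G_train = G.copy()
    G_train.remove_edges_from(test_pos)

    # Sample an equal number of non-edges as negative examples.
    nodes = list(G.nodes())
    test_neg = []
    while len(test_neg) < n_test:
        u, v = rng.choice(nodes, 2, replace=False)
        if not G.has_edge(u, v):
            test_neg.append((u, v))

    emb = laplacian_eigenmap(G_train, dim=dim)
    score = lambda u, v: -np.linalg.norm(emb[u] - emb[v])  # closer => more likely link
    y_true = [1] * len(test_pos) + [0] * len(test_neg)
    y_score = [score(u, v) for u, v in test_pos + test_neg]
    return roc_auc_score(y_true, y_score)

print(link_prediction_auc(nx.karate_club_graph()))
```

A hyperbolic or community-based embedding would slot into the same harness by swapping out the embedding function and the pairwise score.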


Read also

Quantifying the differences between networks is a challenging and ever-present problem in network science. In recent years a multitude of diverse, ad hoc solutions to this problem have been introduced. Here we propose that simple and well-understood ensembles of random networks (such as Erdős-Rényi graphs, random geometric graphs, Watts-Strogatz graphs, the configuration model, and preferential attachment networks) are natural benchmarks for network comparison methods. Moreover, we show that the expected distance between two networks independently sampled from a generative model is a useful property that encapsulates many key features of that model. To illustrate our results, we calculate this within-ensemble graph distance and related quantities for classic network models (and several parameterizations thereof) using 20 distance measures commonly used to compare graphs. The within-ensemble graph distance provides a new framework for developers of graph distances to better understand their creations and for practitioners to better choose an appropriate tool for their particular task.
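
As a rough illustration of the within-ensemble graph distance, the sketch below samples pairs of graphs independently from a single ensemble (Erdős-Rényi, for concreteness) and averages a graph distance over the pairs; the adjacency spectral distance used here is only one illustrative choice among the many distance measures mentioned above, and the ensemble parameters are arbitrary.

```python
# Hedged sketch: estimating the within-ensemble graph distance for an
# Erdős-Rényi ensemble, using an adjacency spectral distance as one of
# the many possible graph-distance measures.
import numpy as np
import networkx as nx

def spectral_distance(G1, G2):
    """L2 distance between sorted adjacency spectra (an illustrative metric)."""
    s1 = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G1)))
    s2 = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G2)))
    return np.linalg.norm(s1 - s2)

def within_ensemble_distance(sampler, n_pairs=50, seed=0):
    """Average distance between independently sampled pairs from one ensemble."""
    rng = np.random.default_rng(seed)
    d = [spectral_distance(sampler(rng), sampler(rng)) for _ in range(n_pairs)]
    return float(np.mean(d)), float(np.std(d))

# Example: G(n, p) with n=100, p=0.05 (parameters chosen for illustration only).
er_sampler = lambda rng: nx.gnp_random_graph(100, 0.05, seed=int(rng.integers(1 << 31)))
print(within_ensemble_distance(er_sampler))
```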
Graphs are a natural representation of data for a variety of real-world applications, such as knowledge graph mining, social network analysis, and biological network comparison. For these applications, graph embedding is crucial as it provides vector representations of the graph. One limitation of existing graph embedding methods is that their embedding optimization procedures are disconnected from the target application. In this paper, we propose a novel approach, namely Customized Graph Embedding (CGE), to tackle this problem. The CGE algorithm learns customized vector representations of graph nodes by automatically differentiating the importance of distinct graph paths for a specific application. Extensive experiments were carried out on a diverse set of node classification datasets, which demonstrate the strong performance of CGE and provide deep insights into the model.
We introduce two models of inclusion hierarchies: Random Graph Hierarchy (RGH) and Limited Random Graph Hierarchy (LRGH). In both models a set of nodes at a given hierarchy level is connected randomly, as in the Erdős-Rényi random graph, with a fixed average degree equal to a system parameter $c$. Clusters of the resulting network are treated as nodes at the next hierarchy level and they are connected again at this level and so on, until the process cannot continue. In the RGH model we use all clusters, including those of size $1$, when building the next hierarchy level, while in the LRGH model clusters of size $1$ stop participating in further steps. We find that in both models the number of nodes at a given hierarchy level $h$ decreases approximately exponentially with $h$. The height of the hierarchy $H$, i.e., the number of all hierarchy levels, increases logarithmically with the system size $N$, i.e., with the number of nodes at the first level. The height $H$ decreases monotonically with the connectivity parameter $c$ in the RGH model and it reaches a maximum for a certain $c_{max}$ in the LRGH model. The distribution of separate cluster sizes in the LRGH model is a power law with an exponent about $-1.25$. The above results follow from approximate analytical calculations and have been confirmed by numerical simulations.
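
A minimal sketch of the hierarchy construction as described above, assuming Erdős-Rényi wiring with average degree $c$ at each level, connected components promoted to nodes of the next level, and singleton clusters dropped in the LRGH variant; the stopping rule and other details are read off the abstract and may differ from the original models.

```python
# Hedged sketch of the hierarchy construction described above: nodes at each
# level are wired Erdős-Rényi-style with average degree c, connected components
# become the nodes of the next level, and (in the LRGH variant) size-1 clusters
# drop out. The stopping rule is an assumption based on the abstract.
import networkx as nx

def build_hierarchy(n, c, limited=False, seed=0):
    """Return the list of level sizes for the RGH (limited=False) / LRGH model."""
    sizes = [n]
    level = 0
    while n > 1:
        p = min(1.0, c / (n - 1))              # average degree ~ c at this level
        G = nx.gnp_random_graph(n, p, seed=seed + level)
        clusters = list(nx.connected_components(G))
        if limited:
            clusters = [cl for cl in clusters if len(cl) > 1]  # LRGH: singletons stop
        if len(clusters) == 0 or len(clusters) >= n:
            break                               # process cannot continue
        n = len(clusters)
        sizes.append(n)
        level += 1
    return sizes

print("RGH levels :", build_hierarchy(10_000, c=2.0))
print("LRGH levels:", build_hierarchy(10_000, c=2.0, limited=True))
```

The length of the returned list gives the hierarchy height $H$, and plotting its entries against the level index should show the roughly exponential decay described above.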
Evaluation of Bayesian deep learning (BDL) methods is challenging. We often seek to evaluate the methods' robustness and scalability, assessing whether new tools give 'better' uncertainty estimates than old ones. These evaluations are paramount for practitioners when choosing BDL tools on top of which they build their applications. Current popular evaluations of BDL methods, such as the UCI experiments, are lacking: methods that excel on these experiments often fail when used in applications such as medical or automotive ones, suggesting a pertinent need for new benchmarks in the field. We propose a new BDL benchmark with a diverse set of tasks, inspired by a real-world medical imaging application on diabetic retinopathy diagnosis. Visual inputs (512x512 RGB images of retinas) are considered, where model uncertainty is used for medical pre-screening, i.e., to refer patients to an expert when the model's diagnosis is uncertain. Methods are then ranked according to metrics derived from the expert domain to reflect real-world use of model uncertainty in automated diagnosis. We develop multiple tasks that fall under this application, including out-of-distribution detection and robustness to distribution shift. We then perform a systematic comparison of well-tuned BDL techniques on the various tasks. From our comparison we conclude that some current techniques which solve benchmarks such as UCI 'overfit' their uncertainty to the dataset; when evaluated on our benchmark, these underperform in comparison to simpler baselines. The code for the benchmark, its baselines, and a simple API for evaluating new BDL tools are made available at https://github.com/oatml/bdl-benchmarks.
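
As a rough illustration of the referral-style evaluation described above, the sketch below ranks predictions by uncertainty, "refers" the most uncertain fraction to an expert, and reports accuracy on the retained automated diagnoses; the labels, probabilities, and entropy-based uncertainty are synthetic placeholders, not the benchmark's models or retinopathy data.

```python
# Hedged sketch of an uncertainty-based referral curve: predictions are sorted
# by predictive uncertainty, the most uncertain fraction is "referred to an
# expert", and accuracy is measured on the retained (automated) diagnoses.
# The predictions and uncertainties below are synthetic placeholders.
import numpy as np

def referral_curve(y_true, y_pred, uncertainty, referral_rates):
    """Accuracy on retained cases as a function of the referred fraction."""
    order = np.argsort(uncertainty)          # most confident first
    accs = []
    for rate in referral_rates:
        keep = order[: int(round((1 - rate) * len(y_true)))]
        accs.append(float(np.mean(y_true[keep] == y_pred[keep])))
    return accs

rng = np.random.default_rng(0)
n = 1000
y_true = rng.integers(0, 2, n)
p_hat = np.clip(y_true * 0.7 + rng.normal(0.15, 0.2, n), 0.01, 0.99)  # toy probabilities
y_pred = (p_hat > 0.5).astype(int)
entropy = -(p_hat * np.log(p_hat) + (1 - p_hat) * np.log(1 - p_hat))  # predictive entropy

rates = [0.0, 0.1, 0.2, 0.3, 0.5]
print(dict(zip(rates, referral_curve(y_true, y_pred, entropy, rates))))
```

If the uncertainty estimates are informative, accuracy on the retained cases should rise as the referral rate grows, which is the behavior the benchmark's metrics reward.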
Online users are typically active on multiple social media networks (SMNs), which constitute a multiplex social network. It is becoming increasingly challenging to determine whether given accounts on different SMNs belong to the same user; this can be expressed as an interlayer link prediction problem in a multiplex network. To address the challenge of predicting interlayer links, feature or structure information is leveraged. Existing methods that use network embedding techniques to address this problem focus on learning a mapping function to unify all nodes into a common latent representation space for prediction; positional relationships between unmatched nodes and their common matched neighbors (CMNs) are not utilized. Furthermore, the layers are often modeled as unweighted graphs, ignoring the strengths of the relationships between nodes. To address these limitations, we propose a framework based on multiple types of consistency between embedding vectors (MulCEV). In MulCEV, the traditional embedding-based method is applied to obtain the degree of consistency between the vectors representing the unmatched nodes, and a proposed distance consistency index based on the positions of nodes in each latent space provides additional clues for prediction. By associating these two types of consistency, the effective information in the latent spaces is fully utilized. Additionally, MulCEV models the layers as weighted graphs to obtain better representation. In this way, the higher the strength of the relationship between nodes, the more similar their embedding vectors in the latent representation space will be. The results of our experiments on several real-world datasets demonstrate that the proposed MulCEV framework markedly outperforms current embedding-based methods, especially when the number of training iterations is small.
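
A hedged sketch of how the two kinds of consistency described above might be combined to score a candidate cross-layer account pair: consistency of the mapped embedding vectors themselves, plus consistency of each node's distances to matched anchor pairs standing in for the common matched neighbors; the linear mapping, the distance-consistency measure, and the weighting are assumptions, not the actual MulCEV formulation.

```python
# Hedged sketch of combining two consistency signals for inter-layer link
# prediction, loosely following the abstract: (i) consistency of the mapped
# embedding vectors themselves, and (ii) consistency of each candidate node's
# distances to matched anchor pairs (stand-ins for the CMNs) in its own space.
# The mapping M, the measures, and the combination rule are assumptions.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def pair_score(emb_x, emb_y, u, v, matched_pairs, M, alpha=0.5):
    """Score the hypothesis that account u (layer X) and v (layer Y) are the same user.

    emb_x, emb_y  : dicts node -> embedding vector for each layer
    matched_pairs : list of (x_node, y_node) anchor pairs already matched
    M             : pre-trained linear mapping from layer-X space to layer-Y space
    """
    # (i) embedding consistency: map u into layer Y's space and compare with v.
    emb_consistency = cosine(M @ emb_x[u], emb_y[v])

    # (ii) distance consistency: u's distances to the anchors in layer X should
    # line up with v's distances to the corresponding anchors in layer Y.
    du = np.array([np.linalg.norm(emb_x[u] - emb_x[a]) for a, _ in matched_pairs])
    dv = np.array([np.linalg.norm(emb_y[v] - emb_y[b]) for _, b in matched_pairs])
    dist_consistency = cosine(du, dv)

    return alpha * emb_consistency + (1 - alpha) * dist_consistency

# Toy usage with random 2-D embeddings and three anchor pairs.
rng = np.random.default_rng(1)
emb_x = {n: rng.normal(size=2) for n in ["a", "b", "c", "u"]}
emb_y = {n: rng.normal(size=2) for n in ["a2", "b2", "c2", "v"]}
anchors = [("a", "a2"), ("b", "b2"), ("c", "c2")]
print(pair_score(emb_x, emb_y, "u", "v", anchors, M=np.eye(2)))
```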