
The HyperKron Graph Model for higher-order features

Posted by Nicole Eikmeier
Publication date: 2018
Research language: English





Graph models have long been used in lieu of real data, which can be expensive and hard to come by. A common class of models constructs a matrix of probabilities and samples an adjacency matrix by flipping a weighted coin for each entry. Examples include the Erdős-Rényi model, the Chung-Lu model, and the Kronecker model. Here we present the HyperKron Graph model: an extension of the Kronecker model, but with a distribution over hyperedges. We prove that we can efficiently generate graphs from this model in time proportional to the number of edges times a small log-factor, and find that in practice the runtime is linear with respect to the number of edges. We illustrate a number of useful features of the HyperKron model, including non-trivial clustering and highly skewed degree distributions. Finally, we fit the HyperKron model to real-world networks and demonstrate the model's flexibility with a complex application to networks with coherent feed-forward loops.
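
As a rough illustration of the coin-flip construction described in the abstract (not the authors' efficient sampler), the sketch below builds a Kronecker matrix of probabilities and samples an adjacency matrix entry by entry. The initiator values and the number of Kronecker powers are illustrative assumptions.

```python
import numpy as np

# Hypothetical 2x2 initiator of edge probabilities (illustrative values only).
initiator = np.array([[0.9, 0.5],
                      [0.5, 0.2]])
k = 8  # number of Kronecker powers, so the graph has 2**k nodes

# Build the full probability matrix by repeated Kronecker products.
P = initiator.copy()
for _ in range(k - 1):
    P = np.kron(P, initiator)

# Sample the adjacency matrix: flip a weighted coin for each entry.
rng = np.random.default_rng(0)
A = (rng.random(P.shape) < P).astype(int)

print("nodes:", A.shape[0], "edges:", A.sum())
```

Note that this naive sampler takes time quadratic in the number of nodes; the abstract's contribution is a sampler whose cost is proportional to the number of (hyper)edges times a small log factor.
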




Read also

Network dismantling aims to break the network into disconnected fragments by removing an optimal set of nodes, and has been widely adopted in many real-world applications such as epidemic control and rumor containment. However, conventional methods often disassemble the system from the perspective of classic networks, which have only pairwise interactions, and ignore the more ubiquitous and natural group-wise interactions modeled by hypernetworks. Moreover, since a simple network cannot describe the collective behavior of multiple objects, it is necessary to solve related problems through hypernetwork dismantling. In this work, we designed a higher-order collective influence measure to identify key node sets in a hypernetwork. It comprehensively considers the environment in which the target node is located as well as the node's own characteristics to determine its importance, so as to dismantle the hypernetwork by removing the selected nodes. Finally, we used the method to carry out a series of real-world hypernetwork dismantling tasks. Experimental results on five real-world hypernetworks demonstrate the effectiveness of our proposed measure.
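
The abstract does not spell out the higher-order collective influence measure, so the sketch below only illustrates the surrounding dismantling loop, using plain hyperdegree as a stand-in score; the hyperedge representation and the removal budget are assumptions.

```python
# Minimal dismantling loop over a hypergraph given as a list of hyperedges
# (sets of node ids). Hyperdegree is a placeholder for the paper's measure.
def hyperdegree(node, hyperedges):
    return sum(1 for e in hyperedges if node in e)

def dismantle(hyperedges, budget):
    removed = []
    edges = [set(e) for e in hyperedges]
    for _ in range(budget):
        nodes = set().union(*edges) if edges else set()
        if not nodes:
            break
        target = max(nodes, key=lambda v: hyperdegree(v, edges))
        removed.append(target)
        # Drop the removed node; hyperedges left with one node disappear.
        edges = [e - {target} for e in edges]
        edges = [e for e in edges if len(e) > 1]
    return removed

print(dismantle([{1, 2, 3}, {2, 3, 4}, {4, 5}], budget=2))
```
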
Graph models are widely used to analyse diffusion processes embedded in social contacts and to develop applications. A range of graph models are available to replicate the underlying social structures and dynamics realistically. However, most current graph models consider only concurrent interactions among individuals in co-located interaction networks; they do not account for indirect interactions that can transmit spreading items to individuals who visit the same locations at different times but within a certain time limit. The diffusion phenomenon occurring through direct and indirect interactions is called same place different time (SPDT) diffusion. This paper introduces a model to synthesize co-located interaction graphs capturing both direct interactions, where individuals meet at a location, and indirect interactions, where individuals visit the same location at different times within a set timeframe. We analyze 60 million location updates made by 2 million users of a social networking application to characterize the graph properties, including the space-time correlations and time-evolving characteristics such as bursty or ongoing behaviors. The generated synthetic graph reproduces the diffusion dynamics of a realistic contact graph and reduces the prediction error by up to 82% when compared to other contact graph models, demonstrating its potential for forecasting epidemic spread.
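
As a hedged sketch of the direct/indirect link distinction described above, the snippet below pairs location visits and classifies them as direct (overlapping stays) or indirect (same place within a time window); the visit format and the 60-minute window are made-up assumptions, not the paper's parameters.

```python
from itertools import combinations

INDIRECT_WINDOW = 60  # minutes; illustrative threshold only

def spdt_links(visits):
    """Each visit is (user, location, arrival_time, departure_time)."""
    direct, indirect = set(), set()
    for a, b in combinations(visits, 2):
        if a[1] != b[1]:
            continue  # different locations never create a link
        first, later = sorted((a, b), key=lambda v: v[2])
        if later[2] <= first[3]:
            direct.add((first[0], later[0]))       # overlapping stays
        elif later[2] - first[3] <= INDIRECT_WINDOW:
            indirect.add((first[0], later[0]))     # same place, different time
    return direct, indirect

visits = [("u1", "cafe", 0, 30), ("u2", "cafe", 10, 40), ("u3", "cafe", 50, 70)]
print(spdt_links(visits))
```
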
Graph models, like other machine learning models, have implicit and explicit biases built in, which often impact performance in nontrivial ways. A model's faithfulness is often measured by comparing the newly generated graph against the source graph using any number or combination of graph properties. Differences in the size or topology of the generated graph therefore indicate a loss in the model. Yet, in many systems, errors encoded in loss functions are subtle and not well understood. In the present work, we introduce the Infinity Mirror test for analyzing the robustness of graph models. This straightforward stress test works by repeatedly fitting a model to its own outputs. A hypothetically perfect graph model would show no deviation from the source graph; however, a model's implicit biases and assumptions are exaggerated by the Infinity Mirror test, exposing potential issues that were previously obscured. Through an analysis of thousands of experiments on synthetic and real-world graphs, we show that several conventional graph models degenerate in exciting and informative ways. We believe that the observed degenerative patterns are clues to the future development of better graph models.
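
The Infinity Mirror loop itself is simple to sketch: fit a model to a graph, generate a new graph, and repeat. The snippet below uses a toy Erdős-Rényi fit (matching only node and edge counts) purely as a stand-in model, so the observed drift is illustrative rather than the paper's result.

```python
import networkx as nx

def fit_and_generate(graph, seed=None):
    # Toy "model": estimate a single edge probability and resample.
    n, m = graph.number_of_nodes(), graph.number_of_edges()
    p = 2 * m / (n * (n - 1)) if n > 1 else 0.0
    return nx.gnp_random_graph(n, p, seed=seed)

def infinity_mirror(source, generations=5):
    current = source
    for g in range(generations):
        current = fit_and_generate(current, seed=g)
        print(f"generation {g + 1}: {current.number_of_edges()} edges, "
              f"clustering {nx.average_clustering(current):.3f}")
    return current

infinity_mirror(nx.karate_club_graph())
```
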
The topological structure of complex networks has fascinated researchers for several decades, resulting in the discovery of many universal properties and recurring characteristics of different kinds of networks. However, much less is known today about network dynamics: indeed, complex networks in reality are not static, but rather evolve dynamically over time. Our paper is motivated by the empirical observation that network evolution patterns seem far from random, but exhibit structure. Moreover, the specific patterns appear to depend on the network type, contradicting the existence of a one-size-fits-all model. However, we still lack observables to quantify these intuitions, as well as metrics to compare graph evolutions. Such observables and metrics are needed for extrapolating or predicting evolutions, as well as for interpolating graph evolutions. To explore the many faces of graph dynamics and to quantify temporal changes, this paper suggests building upon the concept of centrality, a measure of node importance in a network. In particular, we introduce the notion of centrality distance, a natural similarity measure for two graphs which depends on a given centrality, characterizing the graph type. Intuitively, centrality distances reflect the extent to which (non-anonymous) node roles are different between two graphs or, in the case of dynamic graphs, have changed over time. We evaluate the centrality distance approach for five evolutionary models and seven real-world social and physical networks. Our results empirically show the usefulness of centrality distances for characterizing graph dynamics compared to a null model of random evolution, and highlight the differences between the considered scenarios. Interestingly, our approach allows us to compare the dynamics of very different networks in terms of scale and evolution speed.
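
A minimal sketch of the centrality-distance idea, assuming both graphs share a labeled (non-anonymous) node set: compute a chosen centrality on each graph and sum the per-node differences. Degree centrality and the L1 aggregation are illustrative choices, not necessarily the paper's exact definition.

```python
import networkx as nx

def centrality_distance(g1, g2, centrality=nx.degree_centrality):
    # Sum per-node differences of the chosen centrality over both node sets.
    c1, c2 = centrality(g1), centrality(g2)
    nodes = set(c1) | set(c2)
    return sum(abs(c1.get(v, 0.0) - c2.get(v, 0.0)) for v in nodes)

g_t0 = nx.path_graph(5)          # snapshot at time t
g_t1 = nx.path_graph(5)
g_t1.add_edge(0, 4)              # snapshot at time t+1 with one new edge
print(centrality_distance(g_t0, g_t1))
```
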
The objective of this research is to explore the temporal importance of community-scale human activity features for rapid assessment of flood impacts. Ultimate flood impact data, such as flood inundation maps and insurance claims, become available only weeks and months after the floods have receded. Crisis response managers, however, need near-real-time data to prioritize emergency response. This time lag creates a need for rapid flood impact assessment. Some recent studies have shown promising results for using human activity fluctuations as indicators of flood impacts. Existing studies, however, used mainly a single community-scale activity feature for the estimation of flood impacts and have not investigated its temporal importance for indicating flood impacts. Hence, in this study, we examined the importance of heterogeneous human activity features in different flood event stages. Using four community-scale big data categories, we derived ten features related to variations in human activity and evaluated their temporal importance for rapid assessment of flood impacts. Using multiple random forest models, we examined the temporal importance of each feature in indicating the extent of flood impacts in the context of the 2017 Hurricane Harvey in Harris County, Texas. Our findings reveal that 1) fluctuations in the human activity index and the percentage of congested roads are the most important indicators for rapid flood impact assessment during the response and recovery stages; 2) variations in credit card transactions assumed a middle ranking; and 3) patterns of geolocated social media posts (Twitter) were of low importance across flood stages. The results of this research could form the basis of a rapid assessment tool enabling crisis managers to identify hotspots with severe flood impacts at various stages and then plan and prioritize effective response strategies.
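
As a hedged illustration of the per-feature importance analysis described above, the snippet below fits a random forest to synthetic data and reads off feature importances; the feature names and target are placeholders, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in data: four hypothetical activity features and a made-up
# impact target, used only to show how importances are extracted.
rng = np.random.default_rng(0)
features = ["activity_index", "congested_roads", "card_transactions", "tweets"]
X = rng.random((200, len(features)))
y = 3 * X[:, 0] + 2 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 200)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, importance in zip(features, model.feature_importances_):
    print(f"{name:20s} {importance:.3f}")
```
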