Direction Matters: On Influence-Preserving Graph Summarization and Max-cut Principle for Directed Graphs

Added by Wenkai Xu
Publication date: 2019
Language: English





Summarizing large-scale directed graphs into small-scale representations is a useful but understudied problem setting. Conventional clustering approaches, which are based on Min-Cut-style criteria, compress both the vertices and the edges of the graph into communities, which leads to a loss of directed-edge information. Compressing the vertices while preserving the directed-edge information, on the other hand, provides a way to learn a small-scale representation of a directed graph. The reconstruction error, which measures the edge information preserved by the summarized graph, can be used to learn such a representation. Compared to the original graphs, the summarized graphs are easier to analyze and can extract group-level features, which is useful for efficient interventions on population behavior. In this paper, we present a model, based on minimizing reconstruction error with non-negativity constraints, that relates to a Max-Cut criterion and simultaneously identifies the compressed nodes and the directed compressed relations between these nodes. A multiplicative update algorithm with column-wise normalization is proposed. We further provide theoretical results on the identifiability of the model and on the convergence of the proposed algorithm. Experiments are conducted to demonstrate the accuracy and robustness of the proposed method.
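The abstract does not spell out the update rules, but its description (non-negative reconstruction of a directed adjacency matrix, a multiplicative update, column-wise normalization) is consistent with a tri-factorization A ≈ W S Wᵀ, where W assigns nodes to supernodes and S holds the directed relations between supernodes. The Python sketch below illustrates that reading; the factorization form, the specific update rules, and all function names are assumptions, not the paper's published algorithm.

```python
import numpy as np

def summarize_digraph(A, k, n_iter=500, eps=1e-9, seed=0):
    """Minimal sketch of a multiplicative-update graph summarizer.

    Assumes the model approximates a directed adjacency matrix A (n x n)
    as W @ S @ W.T, with W (n x k) a non-negative node-to-supernode
    assignment and S (k x k) the directed relations between supernodes.
    This is a generic non-negative tri-factorization sketch, not the
    paper's exact method.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    W = rng.random((n, k))
    S = rng.random((k, k))
    for _ in range(n_iter):
        # Multiplicative update for W (keeps entries non-negative).
        num = A @ W @ S.T + A.T @ W @ S
        den = W @ (S @ (W.T @ W) @ S.T + S.T @ (W.T @ W) @ S) + eps
        W *= num / den
        # Column-wise normalization, as mentioned in the abstract.
        W /= W.sum(axis=0, keepdims=True) + eps
        # Multiplicative update for S.
        num_s = W.T @ A @ W
        den_s = W.T @ W @ S @ W.T @ W + eps
        S *= num_s / den_s
    return W, S

# Toy usage: two groups with one-way influence from group 0 to group 1.
A = np.zeros((6, 6))
A[np.ix_([0, 1, 2], [3, 4, 5])] = 1.0  # all edges point group 0 -> group 1
W, S = summarize_digraph(A, k=2)
print(np.round(S, 2))  # an asymmetric S reflects the preserved edge direction
```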




Related Research

We develop a Bregman proximal gradient method for structure learning on linear structural causal models. While the problem is non-convex, has high curvature and is in fact NP-hard, Bregman gradient methods allow us to neutralize at least part of the impact of curvature by measuring smoothness against a highly nonlinear kernel. This allows the method to make longer steps and significantly improves convergence. Each iteration requires solving a Bregman proximal step which is convex and efficiently solvable for our particular choice of kernel. We test our method on various synthetic and real data sets.
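As a rough illustration of the mechanics, the sketch below runs an unconstrained Bregman proximal gradient step, where the step reduces to inverting the kernel gradient ∇h (mirror descent). The quartic kernel h(x) = ¼‖x‖⁴ + ½‖x‖² and the toy objective are assumptions chosen to make the example self-contained; the paper's kernel, constraints, and loss differ.

```python
import numpy as np

# Sketch of an unconstrained Bregman proximal gradient step with the
# kernel h(x) = 0.25*||x||^4 + 0.5*||x||^2 (an assumed choice suited to
# losses with quartic growth; not necessarily the paper's kernel).

def grad_h(x):
    return (np.dot(x, x) + 1.0) * x          # grad h(x) = (||x||^2 + 1) x

def grad_h_inv(g):
    # Solve grad_h(x) = g: x = g / (r^2 + 1), where r(r^2 + 1) = ||g||.
    gn = np.linalg.norm(g)
    r = np.real(np.roots([1.0, 0.0, 1.0, -gn])).max()  # unique real root
    return g / (r * r + 1.0)

def bregman_step(x, grad_f, step):
    # x_new solves min_y <grad f(x), y> + (1/step) * D_h(y, x), i.e.
    # grad_h(x_new) = grad_h(x) - step * grad f(x) when unconstrained.
    return grad_h_inv(grad_h(x) - step * grad_f(x))

# Toy usage: minimize f(x) = 0.25*||x||^4, which is smooth relative to h
# even though its ordinary gradient is not globally Lipschitz.
grad_f = lambda x: np.dot(x, x) * x
x = np.array([3.0, -2.0])
for _ in range(200):
    x = bregman_step(x, grad_f, step=0.5)
print(x)  # approaches the minimizer at the origin
```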
Xiaohui Chen, Xu Han, Jiajing Hu (2021)
A graph generative model defines a distribution over graphs. One type of generative model is constructed by autoregressive neural networks, which sequentially add nodes and edges to generate a graph. However, the likelihood of a graph under the autoregressive model is intractable, as there are numerous sequences leading to the given graph; this makes maximum likelihood estimation challenging. Instead, in this work we derive the exact joint probability over the graph and the node ordering of the sequential process. From the joint, we approximately marginalize out the node orderings and compute a lower bound on the log-likelihood using variational inference. We train graph generative models by maximizing this bound, without using the ad-hoc node orderings of previous methods. Our experiments show that the log-likelihood bound is significantly tighter than the bound of previous schemes. Moreover, the models fitted with the proposed algorithm can generate high-quality graphs that match the structures of target graphs not seen during training. We have made our code publicly available at https://github.com/tufts-ml/graph-generation-vi.
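To make the bound concrete, here is a toy illustration of the inequality log p(G) = log Σ_π p(G, π) ≥ E_{q(π)}[log p(G, π) − log q(π)], with a dummy joint score and a uniform q(π) standing in for the learned autoregressive model and variational posterior (both are assumptions for the sake of the example).

```python
import itertools, math

# Sketch of the variational bound over node orderings pi:
#   log p(G) = log sum_pi p(G, pi) >= E_q[log p(G, pi) - log q(pi)].

def joint_log_prob(graph_edges, order):
    # Dummy score standing in for a neural joint model: reward orderings
    # where edge sources precede targets (unnormalized toy log p(G, pi)).
    pos = {v: i for i, v in enumerate(order)}
    return sum(1.0 for u, v in graph_edges if pos[u] < pos[v]) - 5.0

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3)]
orders = list(itertools.permutations(nodes))

# Exact log-marginal, tractable only for tiny graphs.
log_p = math.log(sum(math.exp(joint_log_prob(edges, o)) for o in orders))

# ELBO under a uniform q(pi); a strict lower bound unless the joint is
# constant across orderings (Jensen's inequality).
log_q = -math.log(len(orders))
elbo = sum(joint_log_prob(edges, o) - log_q for o in orders) / len(orders)

print(f"log p(G) = {log_p:.3f}, uniform-q ELBO = {elbo:.3f}")  # ELBO <= log p(G)
```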
The Ising antiferromagnet is an important statistical physics model with close connections to the Max Cut problem. Combining spatial mixing arguments with the method of moments and the interpolation method, we pinpoint the replica symmetry breaking phase transition predicted by physicists. Additionally, we rigorously establish upper bounds on the Max Cut of random regular graphs predicted by Zdeborova and Boettcher [Journal of Statistical Mechanics 2010]. As an application we prove that the information-theoretic threshold of the disassortative stochastic block model on random regular graphs coincides with the Kesten-Stigum bound.
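The Max Cut connection rests on the identity cut(σ) = (|E| − H(σ))/2 for spins σ_i ∈ {−1, +1} with antiferromagnetic energy H(σ) = Σ_{(i,j)∈E} σ_i σ_j, so the ground state is exactly a maximum cut. A brute-force check of this identity on a small random regular graph (the toy generator and sizes are illustrative choices, not from the paper):

```python
import itertools
import random

def random_regular_edges(n, d, seed=0):
    # Naive configuration-model pairing; fine as a toy generator. We
    # reject and retry whenever it produces loops or multi-edges.
    rng = random.Random(seed)
    while True:
        stubs = [v for v in range(n) for _ in range(d)]
        rng.shuffle(stubs)
        edges = {tuple(sorted(stubs[i:i + 2])) for i in range(0, len(stubs), 2)}
        if len(edges) == n * d // 2 and all(u != v for u, v in edges):
            return sorted(edges)

edges = random_regular_edges(n=10, d=3)
best_cut, ground_energy = 0, len(edges)
for signs in itertools.product([-1, 1], repeat=10):
    energy = sum(signs[u] * signs[v] for u, v in edges)   # H(sigma)
    best_cut = max(best_cut, (len(edges) - energy) // 2)  # cut(sigma)
    ground_energy = min(ground_energy, energy)

# The antiferromagnetic ground state realizes the maximum cut.
assert best_cut == (len(edges) - ground_energy) // 2
print(best_cut, ground_energy)
```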
In this paper, we consider the problem of designing cut sparsifiers and sketches for directed graphs. To bypass known lower bounds, we allow the sparsifier/sketch to depend on the balance of the input graph, which smoothly interpolates between undirected and directed graphs. We give nearly matching upper and lower bounds for both for-all (cf. Benczur and Karger, STOC 1996) and for-each (Andoni et al., ITCS 2016) cut sparsifiers/sketches as a function of cut balance, defined as the maximum ratio of the cut value in the two directions of a directed graph (Ene et al., STOC 2016). We also show an interesting application of digraph sparsification via cut balance by using it to give a very short proof of a celebrated maximum flow result of Karger and Levine (STOC 2002).
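For intuition, the following brute-force sketch computes the cut-balance quantity defined above on a tiny weighted digraph; the function name and the enumeration strategy are illustrative choices, not from the paper.

```python
import itertools

# For each cut (S, V \ S), compare the total edge weight crossing in the
# two directions; the graph's cut balance is the worst-case (max) ratio.

def cut_balance(n, edges):
    """edges: dict mapping (u, v) -> weight for a digraph on nodes 0..n-1."""
    balance = 1.0
    for mask in range(1, 2 ** (n - 1)):  # enumerate each nontrivial cut once
        S = {v for v in range(n) if mask >> v & 1}
        fwd = sum(w for (u, v), w in edges.items() if u in S and v not in S)
        bwd = sum(w for (u, v), w in edges.items() if u not in S and v in S)
        lo, hi = min(fwd, bwd), max(fwd, bwd)
        balance = max(balance, float('inf') if lo == 0 and hi > 0 else
                      (hi / lo if lo > 0 else 1.0))
    return balance

# A near-symmetric digraph is well balanced; weakening a reverse edge hurts it.
edges = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 1.0, (2, 1): 0.5}
print(cut_balance(3, edges))  # 2.0: the cut {0,1} vs {2} is off by a factor 2
```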
In this article, we propose a new hypothesis testing method for directed acyclic graphs (DAGs). While there is a rich class of DAG estimation methods, there is a relative paucity of DAG inference solutions. Moreover, the existing methods often impose specific model structures such as linear models or additive models, and assume independent data observations. Our proposed test instead allows the associations among the random variables to be nonlinear and the data to be time-dependent. We build the test on highly flexible neural network learners. We establish the asymptotic guarantees of the test, while allowing either the number of subjects or the number of time points for each subject to diverge to infinity. We demonstrate the efficacy of the test through simulations and a brain connectivity network analysis.
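As a caricature of the underlying idea (not the paper's actual test statistic or its asymptotic calibration), one can compare a flexible learner's predictive fit with and without a candidate parent variable; everything in the sketch below, including the use of scikit-learn's MLPRegressor and the simulated data, is an assumption for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy comparison: does adding candidate parent X_j improve prediction of
# X_k beyond the remaining variables? A large gap suggests an edge j -> k.

rng = np.random.default_rng(0)
n = 2000
x_j = rng.normal(size=n)
x_other = rng.normal(size=n)
x_k = np.sin(x_j) + 0.1 * rng.normal(size=n)  # nonlinear effect j -> k

def fit_mse(features, target):
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                         random_state=0)
    model.fit(features, target)
    return np.mean((model.predict(features) - target) ** 2)

mse_full = fit_mse(np.column_stack([x_j, x_other]), x_k)
mse_restricted = fit_mse(x_other.reshape(-1, 1), x_k)

print(f"restricted MSE {mse_restricted:.3f} vs full MSE {mse_full:.3f}")
```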
