Probabilistic inferences distill knowledge from graphs to aid humans in making important decisions. Due to the inherent uncertainty in the model and the complexity of the knowledge, it is desirable to help end-users understand the inference outcomes. Unlike deep or high-dimensional parametric models, the lack of interpretability in graphical models stems from cyclic and long-range dependencies and byzantine inference procedures. Prior works neither tackle cycles nor make the inferences interpretable. To close the gap, we formulate the problem of explaining probabilistic inferences as a constrained cross-entropy minimization problem that finds simple subgraphs faithfully approximating the inferences to be explained. We prove that the optimization is NP-hard and that the objective is neither monotonic nor submodular, so efficient greedy approximation with guarantees is ruled out. We propose a general beam search algorithm to find simple trees that enhance the interpretability and diversity of the explanations, with parallelization and a pruning strategy that allow efficient search on large and dense graphs without hurting faithfulness. We demonstrate superior performance on 10 networks from 4 distinct applications, comparing favorably to other explanation methods. Regarding the usability of the explanations, we visualize them in an interface that allows end-users to explore the diverse search results and find more personalized and sensible explanations.
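As a rough illustration of the beam search described above, the following Python sketch grows candidate explanation node sets around a target node, keeping only the top-scoring partial solutions at each step. The `score` function is a hypothetical stand-in for the paper's cross-entropy faithfulness objective; the graph, seed, and size limits are toy choices, not the authors' implementation.

```python
import heapq
import networkx as nx

def beam_search_explanations(G, target, score, beam_width=5, max_size=6):
    """Grow candidate explanation node sets around `target`, keeping only
    the `beam_width` best-scoring candidates at each step.  `score(nodes)`
    is assumed to return a faithfulness value, e.g. the negative
    cross-entropy between inference on the induced subgraph and on G."""
    beam = [frozenset([target])]
    best = set(beam)
    for _ in range(max_size - 1):
        candidates = set()
        for tree in beam:
            for u in tree:                  # expand by one frontier node;
                for v in G.neighbors(u):    # attaching a fresh node keeps
                    if v not in tree:       # a spanning tree easy to extract
                        candidates.add(tree | {v})
        if not candidates:
            break
        beam = heapq.nlargest(beam_width, candidates, key=score)
        best.update(beam)
    return heapq.nlargest(beam_width, best, key=score)

# toy usage: a stand-in score that rewards size but penalizes spread
G = nx.karate_club_graph()
dist = nx.single_source_shortest_path_length(G, 0)
score = lambda nodes: len(nodes) - 0.5 * sum(dist[v] for v in nodes)
print(sorted(beam_search_explanations(G, target=0, score=score)[0]))
```

Returning the top `beam_width` sets rather than a single optimum mirrors the abstract's emphasis on letting end-users browse diverse candidate explanations.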
Given a graph G where each node is associated with a set of attributes, and a parameter k specifying the number of output clusters, k-attributed graph clustering (k-AGC) groups the nodes of G into k disjoint clusters, such that nodes within the same cluster share similar topological and attribute characteristics, while those in different clusters are dissimilar. This problem is challenging on massive graphs, e.g., with millions of nodes and billions of edges. For such graphs, existing solutions either incur prohibitively high costs or produce clustering results of compromised quality. In this paper, we propose ACMin, an effective approach to k-AGC that yields high-quality clusters at a cost linear in the size of the input graph G. The main contributions of ACMin are twofold: (i) a novel formulation of the k-AGC problem based on an attributed multi-hop conductance quality measure custom-made for this problem setting, which effectively captures cluster coherence in terms of both topological proximity and attribute similarity, and (ii) a linear-time optimization solver that obtains high-quality clusters iteratively, based on efficient matrix operations such as orthogonal iterations, an alternating optimization approach, and an initialization technique that significantly speeds up the convergence of ACMin in practice. Extensive experiments against 11 competitors on 6 real datasets demonstrate that ACMin consistently outperforms all competitors in result quality measured against ground-truth labels, while being up to orders of magnitude faster. In particular, on the Microsoft Academic Knowledge Graph dataset with 265.2 million edges and 1.1 billion attribute values, ACMin outputs high-quality results for 5-AGC within 1.68 hours using a single CPU core, while none of the 11 competitors finish within 3 days.
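A minimal sketch of the orthogonal-iteration step the solver builds on, assuming a generic random-walk matrix in place of ACMin's attributed multi-hop conductance matrix; the toy graph and the crude final labeling rule are illustrative, not the paper's pipeline:

```python
import numpy as np

def orthogonal_iteration(P, k, iters=100, seed=0):
    """Approximate the top-k invariant subspace of P by repeated
    multiplication plus QR re-orthogonalization -- the linear-algebra
    core of subspace-based clustering solvers.  (ACMin applies this to a
    matrix encoding attributed multi-hop conductance; a generic P stands
    in here.)"""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((P.shape[0], k)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(P @ Q)
    return Q  # rows = k-dimensional node embeddings for a clustering step

# toy usage: random-walk matrix of two loosely linked triangles
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)
Q = orthogonal_iteration(P, k=2)
labels = (Q[:, 1] > 0).astype(int)  # crude sign split separates the triangles
print(labels)
```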
Probabilistic graphical models, such as Markov random fields (MRFs), exploit dependencies among random variables to model a rich family of joint probability distributions. Sophisticated inference algorithms, such as belief propagation (BP), can effectively compute the marginal posteriors. Nonetheless, it is still difficult to interpret the inference outcomes for important human decision-making. There is no existing method that rigorously attributes the inference outcomes to the contributing factors of the graphical model. Shapley values provide an axiomatic framework, but naively computing or even approximating the values on general graphical models is challenging and less studied. We propose GraphShapley, which integrates the decomposability of Shapley values, the structure of MRFs, and the iterative nature of BP inference in a principled way for fast Shapley value computation: it 1) systematically enumerates the important contributions to the Shapley values of the explaining variables without duplicates, and 2) incrementally computes the contributions without starting from scratch. We theoretically characterize GraphShapley with respect to independence, equal contribution, and additivity. On nine graphs, we demonstrate that GraphShapley provides sensible and practical explanations.
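For reference, the quantity GraphShapley accelerates can be pinned down with the generic permutation-sampling estimator below; this is the brute-force baseline, not the paper's incremental BP-based computation, and the coalition `value` function is a toy stand-in:

```python
import random

def shapley_monte_carlo(players, value, n_perm=2000, seed=0):
    """Permutation-sampling Shapley estimator: phi_i is the average marginal
    contribution value(S + {i}) - value(S) over random orderings.
    GraphShapley avoids this loop by exploiting MRF structure and
    incremental BP; the sketch only defines the target quantity."""
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_perm):
        order = list(players)
        rng.shuffle(order)
        coalition, prev = set(), value(set())
        for p in order:
            coalition.add(p)
            cur = value(coalition)
            phi[p] += cur - prev   # marginal contribution of p in this order
            prev = cur
    return {p: v / n_perm for p, v in phi.items()}

# toy usage: a coverage game over three "explaining variables"
universe = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
value = lambda S: len(set.union(set(), *(universe[p] for p in S)))
print(shapley_monte_carlo(list(universe), value))
```

Here "a" and "b" are symmetric and should receive equal values, while "c" covers a unique element and gets exactly 1: the independence, equal-contribution, and additivity axioms the abstract mentions are visible even in this toy game.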
We study the evolution of cooperation in populations where individuals play the prisoner's dilemma on a network. Every node of the network corresponds to an individual choosing whether to cooperate or defect in a repeated game. The players revise their actions by imitating those neighbors who have higher payoffs. We show that when the interactions take place on graphs with large girth, cooperation is more likely to emerge. On the flip side, in graphs with many cycles of length 3 and 4, defection spreads more rapidly. One of the key ideas of our analysis is that our dynamics can be seen as a perturbation of the voter model. We write the transition kernel of the corresponding Markov chain in terms of the pairwise correlations in the voter model. We analyze these pairwise correlations and show that in graphs with relatively large girth, cooperators cluster and help each other.
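A small simulation conveys the imitate-the-best revision rule on a large-girth graph; the donation-game payoffs (`b`, `c`) and the cycle topology are illustrative choices, not the exact setting of the analysis:

```python
import networkx as nx

def imitation_step(G, strategy, b=3.0, c=1.0):
    """One synchronous round of imitate-the-best dynamics for a donation-
    game PD: a cooperator (strategy 1) pays c per neighbor and every
    cooperating neighbor donates b.  Payoff parameters are illustrative."""
    payoff = {u: b * sum(strategy[v] for v in G.neighbors(u))
                 - c * G.degree(u) * strategy[u]
              for u in G}
    # each player copies the best-paying player in its closed neighborhood
    return {u: strategy[max([u, *G.neighbors(u)], key=payoff.get)]
            for u in G}

# toy usage on a graph of girth 30 (a cycle): seed a block of cooperators
G = nx.cycle_graph(30)
s = {u: 1 if u < 10 else 0 for u in G}  # 1 = cooperate, 0 = defect
for _ in range(20):
    s = imitation_step(G, s)
print(sum(s.values()), "cooperators remain")
```

With b/c large enough, the seeded block on the cycle holds its boundary indefinitely: clustered cooperators on a cycle-free neighborhood shield one another, the qualitative effect the abstract attributes to large girth.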
Real-world complex networks describe connections between objects; in reality, those objects are often endowed with some kind of features. How does the presence or absence of such features interplay with the network link structure? Although the situation described here is truly ubiquitous, there is a limited body of research dealing with large graphs of this kind. Many previous works considered homophily as the only possible transmission mechanism translating node features into links. Other authors, instead, developed more sophisticated models that are able to handle complex feature interactions, but are unfit to scale to very large networks. We expand on the MGJ model, where interactions between pairs of features can foster or discourage link formation. In this work, we investigate how to estimate the latent feature-feature interactions in this model. We propose two solutions: the first assumes feature independence and is essentially based on Naive Bayes; the second, which relaxes the independence assumption, is based on perceptrons. In fact, we show it is possible to recast the model equation so that it can be seen as the prediction rule of a perceptron. We analyze how classical results for perceptrons can be interpreted in this context; then, we define a fast and simple perceptron-like algorithm for this task, which can process $10^8$ links in minutes. We then compare these two techniques, first on synthetic datasets that follow our model, gaining evidence that the naive independence assumption is detrimental in practice. Second, we consider a real, large-scale citation network where each node (i.e., paper) can be described by different types of characteristics; there, our algorithm can assess how well each set of features explains the links, thereby finding meaningful latent feature-feature interactions.
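The recasting mentioned above, where the link score is linear in the interaction matrix, admits a compact perceptron sketch; the update rule, feature encoding, and toy labels below are illustrative assumptions rather than the paper's tuned algorithm:

```python
import numpy as np

def perceptron_interactions(pairs, labels, d, epochs=20, lr=1.0):
    """Learn a feature-feature interaction matrix W by perceptron updates.
    The link score x_u^T W x_v is linear in W, i.e. a perceptron over the
    outer-product feature map vec(x_u x_v^T)."""
    W = np.zeros((d, d))
    for _ in range(epochs):
        for (xu, xv), y in zip(pairs, labels):   # y in {-1, +1}
            if y * (xu @ W @ xv) <= 0:           # pair is misclassified
                W += lr * y * np.outer(xu, xv)   # classic perceptron step
    return W

# toy usage: a link exists when feature 0 of one endpoint meets feature 1
# of the other; a constant third feature plays the role of a bias term
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(200, 2)).astype(float)
X = np.hstack([bits, np.ones((200, 1))])
pairs = [(X[i], X[i + 1]) for i in range(0, 200, 2)]
labels = [1 if (u[0] and v[1]) or (v[0] and u[1]) else -1 for u, v in pairs]
print(np.round(perceptron_interactions(pairs, labels, d=3), 1))
```

Because each update touches only one rank-one outer product, a pass over the edge list is linear in the number of links, which is what makes the $10^8$-links-in-minutes figure plausible.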
The shortest-path, commute-time, and diffusion distances on undirected graphs have been widely employed in applications such as dimensionality reduction, link prediction, and trip planning. Increasingly, there is interest in using the asymmetric structure of data derived from Markov chains and directed graphs, but few metrics are specifically adapted to this task. We introduce a metric on the state space of any ergodic, finite-state, time-homogeneous Markov chain and, in particular, on any Markov chain derived from a directed graph. Our construction is based on hitting probabilities, with nearness in the metric space related to the transfer of random walkers from one node to another at stationarity. Notably, our metric is insensitive to shortest and average walk distances, thus giving new information compared to existing metrics. We use possible degeneracies in the metric to develop an interesting structural theory of directed graphs and explore a related quotienting procedure. Our metric can be computed in $O(n^3)$ time, where $n$ is the number of states, and in examples we scale up to $n=10,000$ nodes and $\approx 38$M edges on a desktop computer. In several examples, we explore the nature of the metric, compare it to alternative methods, and demonstrate its utility for weak recovery of community structure in dense graphs, visualization, structure recovery, dynamics exploration, and multiscale cluster detection.
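To make the hitting-probability machinery concrete, the sketch below solves the standard absorbing-chain linear systems ("reach j before i") and assembles a symmetric dissimilarity from them. The published metric is defined more carefully and computed in $O(n^3)$ total, whereas this naive pairwise version does a dense solve per pair; treat `hp_dissimilarity` as an illustrative stand-in for small examples only:

```python
import numpy as np

def hit_before(P, i, j):
    """h[s] = Pr(walk from s reaches j before i), via the standard
    absorbing-chain linear system (a dense solve per pair here)."""
    n = P.shape[0]
    A = np.eye(n) - P
    b = np.zeros(n)
    A[i] = 0.0; A[i, i] = 1.0              # boundary condition h[i] = 0
    A[j] = 0.0; A[j, j] = 1.0; b[j] = 1.0  # boundary condition h[j] = 1
    return np.linalg.solve(A, b)

def hp_dissimilarity(P, pi):
    """Illustrative symmetric dissimilarity from hitting probabilities:
    q_ij weights 'reach j before i' by the stationary distribution pi."""
    n = P.shape[0]
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            qij = pi @ hit_before(P, i, j)
            qji = pi @ hit_before(P, j, i)
            D[i, j] = D[j, i] = -np.log(qij * qji)
    return D

# toy usage: a 4-state directed ring with a shortcut
P = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.5, 0.0, 0.0, 0.5],
              [1.0, 0.0, 0.0, 0.0]])
evals, evecs = np.linalg.eig(P.T)                  # stationary distribution
pi = np.real(evecs[:, np.argmax(np.real(evals))])  # = left eigenvector at 1
pi = pi / pi.sum()
print(np.round(hp_dissimilarity(P, pi), 2))
```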