
A sequential algorithm for fast clique percolation

Added by Jussi Kumpula
Publication date: 2008
Field: Physics
Language: English





In complex network research, clique percolation, introduced by Palla et al., is a deterministic community detection method that allows for overlapping communities and is based purely on the local topological properties of a network. Here we present a sequential clique percolation algorithm (SCP) for fast community detection in weighted and unweighted networks, for cliques of a chosen size. The method is based on sequentially inserting the constituent links into the network and simultaneously keeping track of the emerging community structure. Unlike existing algorithms, the SCP method allows detecting k-clique communities at multiple weight thresholds in a single run, and can simultaneously produce a dendrogram representation of the hierarchical community structure. In sparse weighted networks, the SCP algorithm can also be used to implement the weighted clique percolation method recently introduced by Farkas et al. The computational time of the SCP algorithm scales linearly with the number of k-cliques in the network. As an example, the method is applied to a product association network, revealing its nested community structure.
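
As a rough illustration of the sequential idea (a minimal sketch under stated assumptions, not the authors' SCP implementation), the Python fragment below inserts edges one at a time, detects each k-clique completed by the new edge, and merges the (k-1)-cliques it contains in a union-find structure; the resulting components are the k-clique communities. Names such as `scp_communities` and `UnionFind` are illustrative, and weight thresholds and the dendrogram output are omitted.

```python
# Illustrative sketch of sequential k-clique percolation: two k-cliques belong
# to the same community if they share a (k-1)-clique, so every new k-clique
# unions its (k-1)-sub-cliques in a union-find structure.
from itertools import combinations

class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def scp_communities(edges, k):
    """k-clique communities (as node sets) after inserting all edges in order."""
    adj = {}          # adjacency built up edge by edge
    uf = UnionFind()  # union-find over (k-1)-cliques, stored as frozensets
    for u, v in edges:
        adj.setdefault(u, set()); adj.setdefault(v, set())
        common = adj[u] & adj[v]          # nodes already adjacent to both endpoints
        adj[u].add(v); adj[v].add(u)
        for rest in combinations(sorted(common), k - 2):
            # (u, v) + rest is a new k-clique iff rest is itself fully connected
            if all(b in adj[a] for a, b in combinations(rest, 2)):
                subs = [frozenset(s) for s in combinations((u, v) + rest, k - 1)]
                for s in subs[1:]:
                    uf.union(subs[0], s)  # merge communities sharing a (k-1)-clique
    communities = {}
    for s in list(uf.parent):
        communities.setdefault(uf.find(s), set()).update(s)
    return list(communities.values())

# toy example for k = 3: two triangles sharing an edge form one community
print(scp_communities([(1, 2), (2, 3), (1, 3), (2, 4), (3, 4), (5, 6)], k=3))
```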




Read More

Maximum likelihood estimation of mixture proportions has a long history, and continues to play an important role in modern statistics, including in the development of nonparametric empirical Bayes methods. The problem has traditionally been solved using the expectation maximization (EM) algorithm, but recent work by Koenker & Mizera shows that modern convex optimization techniques -- in particular, interior point methods -- are substantially faster and more accurate than EM. Here, we develop a new solution based on sequential quadratic programming (SQP). It is substantially faster than the interior point method, and just as accurate. Our approach combines several ideas: first, it solves a reformulation of the original problem; second, it uses an SQP approach to make the best use of the expensive gradient and Hessian computations; third, the SQP iterations are implemented using an active set method to exploit the sparse nature of the quadratic subproblems; fourth, it uses accurate low-rank approximations for more efficient gradient and Hessian computations. We illustrate the benefits of our approach in experiments on synthetic data sets as well as a large genetic association data set. In large data sets (n = 1,000,000 observations, m = 1,000 mixture components), our implementation achieves at least a 100-fold reduction in runtime compared with a state-of-the-art interior point solver. Our methods are implemented in Julia, and in an R package available on CRAN (see https://CRAN.R-project.org/package=mixsqp).
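
For orientation, here is a small hedged sketch (not the mixsqp code) of the optimization problem being solved and of the classical EM baseline that the SQP approach is designed to outperform; the likelihood matrix `L` and the toy normal-mixture data are illustrative assumptions.

```python
# The objective is f(pi) = sum_i log( sum_j L[i, j] * pi[j] ) over the simplex,
# where L[i, j] is the likelihood of observation i under fixed component j.
# Below is the slow EM baseline, not the SQP method from the paper.
import numpy as np

def em_mixture_proportions(L, n_iter=200):
    """EM for mixture proportions given an n-by-m likelihood matrix L."""
    n, m = L.shape
    pi = np.full(m, 1.0 / m)
    for _ in range(n_iter):
        resp = L * pi                           # unnormalized responsibilities
        resp /= resp.sum(axis=1, keepdims=True) # E-step: normalize per observation
        pi = resp.mean(axis=0)                  # M-step: average responsibility
    return pi

# toy data: 3 fixed unit-variance normal components, true weights (0.6, 0.3, 0.1)
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 600), rng.normal(4, 1, 300), rng.normal(8, 1, 100)])
means = np.array([0.0, 4.0, 8.0])
L = np.exp(-0.5 * (x[:, None] - means[None, :]) ** 2) / np.sqrt(2 * np.pi)
print(em_mixture_proportions(L).round(3))       # roughly [0.6, 0.3, 0.1]
```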
We present efficient algorithms to generate a bit string in which each bit is set with arbitrary probability. By adopting a hybrid algorithm, i.e., a finite-bit density approximation with correction techniques, we achieve 3.8 times faster random bit generation than the simple algorithm for the 32-bit case and 6.8 times faster for the 64-bit case. Employing the developed algorithm, we apply the multispin coding technique to one-dimensional bond-directed percolation. The simulations are accelerated by up to a factor of 14 compared with an optimized scalar implementation. The random bit string generation algorithm proposed here is applicable to general Monte Carlo methods.
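
The following hedged Python sketch illustrates only the finite-bit density approximation idea (the correction techniques from the paper are omitted): the target probability is truncated to a fixed number of binary digits, and uniform random words are combined with AND/OR according to those digits. The function name `biased_word` and the parameter choices are illustrative assumptions.

```python
# Build a word whose bits are each 1 with probability ~p by combining uniform
# random words according to the binary expansion p ~ 0.d1 d2 ... dn.
import random

def biased_word(p, n_bits=16, width=64):
    """Return a 'width'-bit integer whose bits are each set with probability ~p."""
    mask = (1 << width) - 1
    digits, frac = [], p
    for _ in range(n_bits):                 # binary digits of p (truncated)
        frac *= 2
        digits.append(int(frac >= 1.0))
        frac -= digits[-1]
    # Recursion from the least significant digit upward:
    #   x <- (r OR x) if digit == 1, (r AND x) if digit == 0,
    # which maps bit density q to (digit + q) / 2, reproducing 0.d1...dn.
    x = 0
    for d in reversed(digits):
        r = random.getrandbits(width)
        x = (r | x) if d else (r & x)
    return x & mask

# quick check: the fraction of set bits should be close to p
p = 0.3
samples = [bin(biased_word(p)).count("1") for _ in range(2000)]
print(sum(samples) / (2000 * 64))           # ~0.3
```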
A popular paradigm for 3D point cloud registration is by extracting 3D keypoint correspondences, then estimating the registration function from the correspondences using a robust algorithm. However, many existing 3D keypoint techniques tend to produce large proportions of erroneous correspondences or outliers, which significantly increases the cost of robust estimation. An alternative approach is to directly search for the subset of correspondences that are pairwise consistent, without optimising the registration function. This gives rise to the combinatorial problem of matching with pairwise constraints. In this paper, we propose a very efficient maximum clique algorithm to solve matching with pairwise constraints. Our technique combines tree searching with efficient bounding and pruning based on graph colouring. We demonstrate that, despite the theoretical intractability, many real problem instances can be solved exactly and quickly (seconds to minutes) with our algorithm, which makes our approach an excellent alternative to standard robust techniques for 3D registration.
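
As a rough sketch of the general technique rather than the authors' optimized solver, the fragment below implements branch-and-bound maximum clique with a greedy-colouring upper bound: the number of colours needed to colour the candidate set bounds the size of any clique it can still contribute, so branches that cannot beat the incumbent are pruned. The adjacency-dictionary interface and the helper names are illustrative assumptions.

```python
def max_clique(adj):
    """adj: dict node -> set of neighbours. Returns one maximum clique (a list)."""
    best = []

    def greedy_colour(cands):
        # Greedy colouring of the candidate set; returns (vertex, colour) pairs.
        colour_classes, order = [], []
        for v in sorted(cands, key=lambda v: len(adj[v] & cands)):
            for c, cls in enumerate(colour_classes):
                if not (adj[v] & cls):
                    cls.add(v); order.append((v, c + 1)); break
            else:
                colour_classes.append({v}); order.append((v, len(colour_classes)))
        return order

    def expand(clique, cands):
        nonlocal best
        # process vertices in decreasing colour order; remaining colours are smaller
        for v, colour in sorted(greedy_colour(cands), key=lambda t: -t[1]):
            if len(clique) + colour <= len(best):
                return                       # bound: cannot beat the incumbent
            cands = cands - {v}
            new_clique = clique + [v]
            new_cands = cands & adj[v]
            if new_cands:
                expand(new_clique, new_cands)
            elif len(new_clique) > len(best):
                best = new_clique            # new incumbent clique

    expand([], set(adj))
    return best

# toy example: correspondences 0-4, edges mark pairwise-consistent pairs
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
print(max_clique(adj))   # e.g. [3, 2, 1, 0] (vertex order may vary)
```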
Yuri Gurevich, 2021
To reversify an arbitrary sequential algorithm $A$, we gently instrument $A$ with bookkeeping machinery. The result is a step-for-step reversible algorithm that mimics $A$ step-for-step and stops exactly when $A$ does. Without loss of generality, we presume that algorithm $A$ is presented as an abstract state machine that is behaviorally identical to $A$. The existence of such representation has been proven theoretically, and the practicality of such representation has been amply demonstrated.
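
A toy sketch of the bookkeeping idea, not Gurevich's abstract-state-machine construction: each forward step records the previous values of the state locations it overwrites, which is enough to undo the step exactly. The wrapper `reversify` and the running-sum example are illustrative assumptions.

```python
def reversify(step):
    """Wrap a step function (state -> dict of updates) into (forward, backward)."""
    journal = []                       # bookkeeping: one undo record per step

    def forward(state):
        updates = step(state)          # locations the original step writes
        journal.append({k: state.get(k) for k in updates})
        state.update(updates)
        return state

    def backward(state):
        for k, old in journal.pop().items():   # restore what the step destroyed
            if old is None:
                state.pop(k, None)             # location did not exist before
            else:
                state[k] = old
        return state

    return forward, backward

# toy algorithm: one step of a running sum
def sum_step(state):
    i = state["i"]
    return {"acc": state["acc"] + state["xs"][i], "i": i + 1}

fwd, bwd = reversify(sum_step)
s = {"xs": [3, 1, 4], "acc": 0, "i": 0}
fwd(s); fwd(s)            # two steps forward
print(s["acc"], s["i"])   # 4 2
bwd(s); bwd(s)            # undo them in reverse order
print(s["acc"], s["i"])   # 0 0
```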
We elaborate on a linear time implementation of the Collective Influence (CI) algorithm introduced by Morone & Makse, Nature 524, 65 (2015) to find the minimal set of influencers in a network via optimal percolation. We show that the computational complexity of CI is O(N log N) when removing nodes one-by-one, with N the number of nodes. This is made possible by using an appropriate data structure to process the CI values, and by the finite radius l of the CI sphere. Furthermore, we introduce a simple extension of CI when l is infinite, the CI propagation (CI_P) algorithm, which considers the global optimization of influence via message passing in the whole network and identifies a slightly smaller fraction of influencers than CI. Remarkably, CI_P is able to reproduce the exact analytical optimal percolation threshold obtained by Bau & Wormald, Random Struct. Alg. 21, 397 (2002) for cubic random regular graphs, leaving little room for improvement on random graphs. We also introduce the Collective Immunization Belief Propagation algorithm (CI_BP), a belief-propagation (BP) variant of CI based on optimal immunization, which has the same performance as CI_P. However, this small performance gain of the order of 1-2% in the low-influencers tail comes at the expense of increasing the computational complexity from O(N log N) to O(N^2 log N), rendering both CI_P and CI_BP prohibitive for finding influencers in modern-day big data. The same nonlinear running time drawback pertains to a recently introduced BP-decimation (BPD) algorithm by Mugisha & Zhou, arXiv:1603.05781. For instance, we show that for big-data social networks of typically 200 million users (e.g., active Twitter users sending 500 million tweets per day), CI finds the influencers in less than 3 hours running on a single CPU, while the BP-based algorithms (CI_P, CI_BP and BPD) would take more than 3,000 years to accomplish the same task.
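
For concreteness, here is a deliberately naive Python sketch of the CI score and greedy removal. It recomputes every score at every step, so it is nowhere near the O(N log N) implementation described in the paper, which keeps CI values in a heap and updates only nodes near each removal; the function names and the toy graph are illustrative assumptions.

```python
# CI_l(i) = (k_i - 1) * sum over nodes j at distance exactly l from i of (k_j - 1)
from collections import deque

def ci_score(adj, i, l):
    """CI_l of node i in adjacency dict adj (node -> set of neighbours)."""
    dist, queue, frontier = {i: 0}, deque([i]), []
    while queue:
        u = queue.popleft()
        if dist[u] == l:
            frontier.append(u)          # node sits on the ball's frontier
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return (len(adj[i]) - 1) * sum(len(adj[j]) - 1 for j in frontier)

def greedy_influencers(adj, l=2, n_remove=3):
    """Remove n_remove nodes one by one, always the current highest-CI node."""
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    removed = []
    for _ in range(n_remove):
        best = max(adj, key=lambda u: ci_score(adj, u, l))
        removed.append(best)
        for v in adj.pop(best):
            adj[v].discard(best)
    return removed

# toy star-of-stars example: hub 0 connects three smaller hubs
adj = {0: {1, 2, 3}, 1: {0, 4, 5}, 2: {0, 6, 7}, 3: {0, 8, 9},
       4: {1}, 5: {1}, 6: {2}, 7: {2}, 8: {3}, 9: {3}}
print(greedy_influencers(adj, l=1, n_remove=1))   # [0]
```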
