
A general purpose algorithm for counting simple cycles and simple paths of any length

Publication date: 2016
Language: English





We describe a general purpose algorithm for counting simple cycles and simple paths of any length $\ell$ on a (weighted di)graph on $N$ vertices and $M$ edges, achieving a time complexity of $O\left(N+M+\big(\ell^{\omega}+\ell\Delta\big)\,|S_\ell|\right)$. In this expression, $|S_\ell|$ is the number of (weakly) connected induced subgraphs of $G$ on at most $\ell$ vertices, $\Delta$ is the maximum degree of any vertex and $\omega$ is the exponent of matrix multiplication. We compare the algorithm complexity both theoretically and experimentally with most of the existing algorithms for the same task. These comparisons show that the algorithm described here is the best general purpose algorithm for the class of graphs where $(\ell^{\omega-1}\Delta^{-1}+1)\,|S_\ell|\leq |\text{Cycle}_\ell|$, with $|\text{Cycle}_\ell|$ the total number of simple cycles of length at most $\ell$, including backtracks and self-loops. On Erdős-Rényi random graphs, we find empirically that this happens when the edge probability is larger than circa $4/N$. In addition, we show that some real-world networks also belong to this class. Finally, the algorithm permits the enumeration of simple cycles and simple paths on networks where vertices are labeled from an alphabet on $n$ letters with a time complexity of $O\left(N+M+\big(n^{\ell}\ell^{\omega}+\ell\Delta\big)\,|S_\ell|\right)$. A Matlab implementation of the algorithm proposed here is available for download.
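The complexity bound is dominated by $|S_\ell|$, the number of connected induced subgraphs on at most $\ell$ vertices. As an illustration only (this is not the paper's algorithm), the sketch below enumerates these sets for a small graph given as an adjacency dictionary; it is exponential in general, which is exactly why $|S_\ell|$ appears as a multiplicative factor in the bound.

def connected_induced_subgraphs(adj, max_size):
    # adj: dict mapping each vertex to the set of its neighbours
    # (for a digraph, pass the underlying undirected adjacency to get
    # *weakly* connected subgraphs).  Returns every connected induced
    # vertex set with at most max_size vertices, each as a frozenset.
    current = {frozenset([v]) for v in adj}      # all size-1 subgraphs
    result = set(current)
    for _ in range(max_size - 1):
        grown = set()
        for s in current:
            boundary = set().union(*(adj[v] for v in s)) - s
            for u in boundary:                   # grow by one adjacent vertex
                grown.add(s | {u})
        result |= grown
        current = grown
    return result

# |S_3| for a 4-cycle 0-1-2-3-0: 4 singletons + 4 edges + 4 paths = 12 sets
ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(len(connected_induced_subgraphs(ring, 3)))   # -> 12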




Read More

A new efficient algorithm is presented for finding all simple cycles that satisfy a length constraint in a directed graph. When the number of vertices is non-trivial, most cycle-finding problems are of practical interest for sparse graphs only. We show that for a class of sparse graphs in which the vertex degrees are almost uniform, our algorithm can find all cycles of length less than or equal to $k$ in $O((c+n)(k-1)d^k)$ steps, where $n$ is the number of vertices, $c$ is the total number of cycles discovered, $d$ is the average degree of the graph's vertices, and $k > 1$. While our analysis of the running time addresses only a class of sparse graphs, we provide empirical and experimental evidence of the efficiency of the algorithm for general sparse graphs. This algorithm is a significant improvement over the only other deterministic algorithm for this problem known to us; it also lends itself to massive parallelism. Experimental results of a serial implementation on some large real-world graphs are presented.
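For reference, the task itself can be made concrete with a plain exponential-time baseline (not the algorithm of this abstract): a rooted depth-first search that counts every directed simple cycle of length at most $k$ exactly once, assuming integer vertex labels and an out-adjacency dictionary that contains every vertex as a key.

def count_bounded_simple_cycles(adj, k):
    # adj: dict mapping every vertex (integer label) to its set of
    # out-neighbours.  Counts directed simple cycles of length <= k,
    # each exactly once, by rooting every cycle at its smallest vertex.
    count = 0
    for root in adj:
        stack = [(root, [root])]
        while stack:
            v, path = stack.pop()
            for u in adj[v]:
                if u == root:                    # closes a cycle of length len(path)
                    count += 1
                elif u > root and u not in path and len(path) < k:
                    stack.append((u, path + [u]))
    return count

# triangle 0->1->2->0 plus the shortcut 0->2->0: two cycles of length <= 3
print(count_bounded_simple_cycles({0: {1, 2}, 1: {2}, 2: {0}}, 3))   # -> 2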
In the subgraph counting problem, we are given an input graph $G(V, E)$ and a target graph $H$; the goal is to estimate the number of occurrences of $H$ in $G$. Our focus here is on designing sublinear-time algorithms for approximately counting occurrences of $H$ in $G$ in the setting where the algorithm is given query access to $G$. This problem has been studied in several recent papers which primarily focused on specific families of graphs $H$ such as triangles, cliques, and stars. However, not much is known about approximate counting of arbitrary graphs $H$. This is in sharp contrast to the closely related subgraph enumeration problem that has received significant attention in the database community as the database join problem. The AGM bound shows that the maximum number of occurrences of any arbitrary subgraph $H$ in a graph $G$ with $m$ edges is $O(m^{\rho(H)})$, where $\rho(H)$ is the fractional edge-cover of $H$, and enumeration algorithms with matching runtime are known for any $H$. We bridge this gap between subgraph counting and subgraph enumeration by designing a sublinear-time algorithm that can estimate the number of any arbitrary subgraph $H$ in $G$, denoted by $\#H$, to within a $(1\pm\epsilon)$-approximation w.h.p. in $O\big(\frac{m^{\rho(H)}}{\#H}\big)\cdot \mathrm{poly}(\log n, 1/\epsilon)$ time. Our algorithm is allowed the standard set of queries for general graphs, namely degree queries, pair queries and neighbor queries, plus an additional edge-sample query that returns an edge chosen uniformly at random. The performance of our algorithm matches that of Eden et al. [FOCS 2015, STOC 2018] for counting triangles and cliques and extends it to all choices of subgraph $H$ under the additional assumption of edge-sample queries. We further show that our algorithm works for the more general database join size estimation problem and prove a matching lower bound for this problem.
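To make the $m^{\rho(H)}$ quantity concrete for the simplest non-trivial case, the sketch below is an exact triangle counter running in $O(m^{3/2})$ time, matching the AGM bound for $H = K_3$ (where $\rho = 3/2$). It is a standard exact baseline, not the sublinear estimator described in the abstract, and assumes a simple undirected graph given as a duplicate-free edge list.

from collections import defaultdict

def count_triangles(edges):
    # Orient every edge towards its higher-degree endpoint (ties broken by
    # label), then intersect out-neighbourhoods: each triangle is counted
    # exactly once, at its lowest-ordered vertex.
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    out = defaultdict(set)
    for u, v in edges:
        a, b = sorted((u, v), key=lambda x: (deg[x], x))
        out[a].add(b)
    return sum(len(out[u] & out[v]) for u in list(out) for v in out[u])

# K_4 has 4 triangles
print(count_triangles([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))   # -> 4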
Jack Wang, 2012
In this article, we devise a concise algorithm for computing BOCP. Our method is simple and easy to implement without sacrificing efficiency. Given two circular-arc polygons with $m$ and $n$ edges respectively, our method runs in $O(m+n+(l+k)\log l)$ time, using $O(m+n+k)$ space, where $k$ is the number of intersections and $l$ is the number of edges. Our algorithm approaches linear complexity when $k$ and $l$ are small. The superiority of the proposed algorithm is also validated through empirical study.
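The $k$ intersections counted in the bound ultimately come from pairwise intersections of the supporting circles of the arcs. As a hedged illustration of that geometric primitive only (the sweep structure of the algorithm above is not reproduced here), the following routine computes the intersection points of two circles.

import math

def circle_circle_intersections(c0, r0, c1, r1, eps=1e-12):
    # Returns the 0, 1 or 2 intersection points of the circles
    # (centre c0, radius r0) and (centre c1, radius r1) as (x, y) tuples.
    (x0, y0), (x1, y1) = c0, c1
    dx, dy = x1 - x0, y1 - y0
    d = math.hypot(dx, dy)
    if d < eps or d > r0 + r1 + eps or d < abs(r0 - r1) - eps:
        return []            # concentric, too far apart, or one inside the other
    a = (r0 * r0 - r1 * r1 + d * d) / (2 * d)   # distance from c0 to the chord
    h = math.sqrt(max(r0 * r0 - a * a, 0.0))
    mx, my = x0 + a * dx / d, y0 + a * dy / d   # foot of the chord on the centre line
    if h < eps:
        return [(mx, my)]    # tangent circles: a single point
    return [(mx - h * dy / d, my + h * dx / d),
            (mx + h * dy / d, my - h * dx / d)]

print(circle_circle_intersections((0.0, 0.0), 1.0, (1.0, 0.0), 1.0))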
We propose a simple iterative (SI) algorithm for the maxcut problem by fully exploiting an equivalent continuous formulation. It does not need rounding at all and has the advantages that all subproblems have explicit analytic solutions, the cut values are monotonically updated, and the iteration points converge to a local optimum in finitely many steps via an appropriate subgradient selection. Numerical experiments on the G-set demonstrate its performance. In particular, the ratios between the best cut values achieved by SI and the best known ones are at least $0.986$ and can be further improved to at least $0.997$ by a preliminary attempt to break out of local optima.
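For orientation, the notion of "monotone cut updates converging to a local optimum in finitely many steps" can be seen in the much simpler 1-flip local search below. This is a generic unweighted baseline and not the SI algorithm of the abstract; the adjacency-dictionary input format is assumed for illustration.

def local_search_maxcut(adj):
    # Greedy 1-flip local search for unweighted max-cut: flip any vertex
    # whose flip strictly increases the cut, until no such vertex exists.
    side = {v: 0 for v in adj}
    improved = True
    while improved:
        improved = False
        for v in adj:
            same = sum(1 for u in adj[v] if side[u] == side[v])
            across = len(adj[v]) - same
            if same > across:          # flipping v strictly increases the cut
                side[v] ^= 1
                improved = True
    cut = sum(1 for v in adj for u in adj[v] if u > v and side[u] != side[v])
    return cut, side

# 5-cycle: the optimum cut has 4 edges, which this search reaches from all zeros
print(local_search_maxcut({0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}})[0])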
We show a deterministic algorithm for computing edge connectivity of a simple graph with $m$ edges in $m^{1+o(1)}$ time. Although the fastest deterministic algorithm by Henzinger, Rao, and Wang [SODA17] has a faster running time of $O(m\log^{2}m\log\log m)$, we believe that our algorithm is conceptually simpler. The key tool for this simplification is the expander decomposition. We exploit it in a very straightforward way compared to how it has been previously used in the literature.
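As a point of comparison only, edge connectivity can also be computed with the classical Stoer-Wagner minimum-cut algorithm in $O(n^3)$ time; the sketch below (with unit edge weights) is that textbook baseline, not the expander-decomposition method of the abstract, and assumes vertices labelled $0,\dots,n-1$.

def edge_connectivity(n, edge_list):
    # Stoer-Wagner global minimum cut with unit weights; for a connected
    # simple graph this value equals the edge connectivity.
    w = [[0] * n for _ in range(n)]
    for u, v in edge_list:
        w[u][v] += 1
        w[v][u] += 1
    vertices = list(range(n))
    best = float('inf')
    while len(vertices) > 1:
        weights = {v: 0 for v in vertices}
        order = []
        candidates = set(vertices)
        while candidates:
            u = max(candidates, key=lambda x: weights[x])   # maximum-adjacency order
            candidates.remove(u)
            order.append(u)
            for v in candidates:
                weights[v] += w[u][v]
        s, t = order[-2], order[-1]
        best = min(best, weights[t])        # weight of the cut-of-the-phase
        for v in vertices:                  # merge t into s
            if v != s and v != t:
                w[s][v] += w[t][v]
                w[v][s] = w[s][v]
        vertices.remove(t)
    return best

# 4-cycle: removing any single edge keeps it connected, so connectivity is 2
print(edge_connectivity(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))   # -> 2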
