
An efficient strongly connected components algorithm in the fault tolerant model

Posted by Keerti Choudhary
Publication date: 2016
Research field: Informatics Engineering
Paper language: English





In this paper we study the problem of maintaining the strongly connected components of a graph in the presence of failures. In particular, we show that given a directed graph $G=(V,E)$ with $n=|V|$ and $m=|E|$, and an integer value $k \geq 1$, there is an algorithm that computes, in $O(2^{k} n \log^2 n)$ time for any set $F$ of size at most $k$, the strongly connected components of the graph $G \setminus F$. The running time of our algorithm is almost optimal since the time for outputting the SCCs of $G \setminus F$ is at least $\Omega(n)$. The algorithm uses a data structure that is computed in a preprocessing phase in polynomial time and is of size $O(2^{k} n^2)$. Our result is obtained using a new observation on the relation between strongly connected components (SCCs) and reachability. More specifically, one of the main building blocks in our result is a restricted variant of the problem in which we only compute strongly connected components that intersect a certain path. Restricting our attention to a path allows us to implicitly compute reachability between the path vertices and the rest of the graph in time that depends logarithmically rather than linearly on the size of the path. This new observation alone, however, is not enough, since we need to find an efficient way to represent the strongly connected components using paths. For this purpose we use a mixture of old and classical techniques, such as the heavy path decomposition of Sleator and Tarjan and the classical depth-first search algorithm. Although these are by now standard techniques, we are not aware of any use of them in the context of dynamic maintenance of SCCs. Therefore, we expect that our new insights and mixture of new and old techniques will be of independent interest.
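To make the query model concrete, the Python sketch below (names and representation are illustrative, not from the paper) recomputes the SCCs of $G \setminus F$ from scratch with Kosaraju's two-pass DFS. This is only the naive $O(n+m)$-per-query baseline that the paper's $O(2^{k} n \log^2 n)$-time query over a precomputed $O(2^{k} n^2)$-size structure improves on when $m$ is large; the heavy-path-decomposition machinery itself is not reproduced here.

from collections import defaultdict

def sccs_after_faults(n, edges, faults):
    # Naive baseline: recompute the SCCs of G \ F from scratch with
    # Kosaraju's two-pass DFS in O(n + m) time per query.  This only
    # illustrates the query interface; it is NOT the paper's
    # O(2^k n log^2 n)-time fault-tolerant data structure.
    faults = set(faults)
    alive = [v for v in range(n) if v not in faults]
    fwd, rev = defaultdict(list), defaultdict(list)
    for u, v in edges:
        if u not in faults and v not in faults:
            fwd[u].append(v)
            rev[v].append(u)

    # Pass 1: iterative DFS on G \ F, recording vertices by finish time.
    order, seen = [], set()
    for s in alive:
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(fwd[s]))]
        while stack:
            v, it = stack[-1]
            w = next(it, None)
            if w is None:
                order.append(v)
                stack.pop()
            elif w not in seen:
                seen.add(w)
                stack.append((w, iter(fwd[w])))

    # Pass 2: DFS on the reversed graph in decreasing finish time;
    # every tree found is one strongly connected component.
    comp, sccs = {}, []
    for s in reversed(order):
        if s in comp:
            continue
        sccs.append([])
        comp[s] = len(sccs) - 1
        stack = [s]
        while stack:
            v = stack.pop()
            sccs[-1].append(v)
            for u in rev[v]:
                if u not in comp:
                    comp[u] = len(sccs) - 1
                    stack.append(u)
    return sccs

# Example: the 4-cycle 0 -> 1 -> 2 -> 3 -> 0 is a single SCC, but
# failing vertex 2 breaks it into three singleton SCCs.
print(sccs_after_faults(4, [(0, 1), (1, 2), (2, 3), (3, 0)], faults=[2]))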




Read also

It was recently shown that a version of the greedy algorithm gives a construction of fault-tolerant spanners that is size-optimal, at least for vertex faults. However, the algorithm to construct this spanner is not polynomial-time, and the best-known polynomial time algorithm is significantly suboptimal. Designing a polynomial-time algorithm to construct (near-)optimal fault-tolerant spanners was given as an explicit open problem in the two most recent papers on fault-tolerant spanners ([Bodwin, Dinitz, Parter, Vassilevska Williams SODA 18] and [Bodwin, Patel PODC 19]). We give a surprisingly simple algorithm which runs in polynomial time and constructs fault-tolerant spanners that are extremely close to optimal (off by only a linear factor in the stretch) by modifying the greedy algorithm to run in polynomial time. To complement this result, we also give simple distributed constructions in both the LOCAL and CONGEST models.
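For reference, here is a Python sketch (function names are my own) of the exponential-time fault-tolerant greedy rule that this line of work starts from, in the unweighted vertex-fault setting: an edge is kept exactly when some fault set of size at most $f$ stretches its endpoints beyond $2k-1$ in the spanner built so far. The cited paper's contribution, a polynomial-time replacement for the inner enumeration over fault sets, is not reproduced here, so this sketch is exponential in $f$.

from itertools import combinations
from collections import deque

def bfs_dist(adj, src, dst, banned):
    # Shortest-path length from src to dst with the vertices in
    # `banned` removed (infinity if disconnected).
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        if v == dst:
            return dist[v]
        for u in adj.get(v, ()):
            if u not in banned and u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return float("inf")

def ft_greedy_spanner(vertices, edges, k, f):
    # Exponential-time f-vertex-fault-tolerant (2k-1)-spanner greedy
    # rule, unweighted case: keep an edge iff some fault set of size
    # <= f stretches u-v beyond 2k-1 in the spanner built so far.
    H = {v: set() for v in vertices}
    spanner = []
    for u, v in edges:
        others = [w for w in vertices if w not in (u, v)]
        needed = False
        for r in range(f + 1):
            for F in combinations(others, r):
                if bfs_dist(H, u, v, set(F)) > 2 * k - 1:
                    needed = True
                    break
            if needed:
                break
        if needed:
            H[u].add(v)
            H[v].add(u)
            spanner.append((u, v))
    return spanner

# Toy usage: a 4-cycle plus a chord, one vertex fault, stretch 3 (k=2).
# The greedy rule keeps the 4-cycle and drops the chord (0, 2).
print(ft_greedy_spanner([0, 1, 2, 3],
                        [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)],
                        k=2, f=1))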
In this work, we initiate the study of fault tolerant Max Cut, where given an edge-weighted undirected graph $G=(V,E)$, the goal is to find a cut $S \subseteq V$ that maximizes the total weight of edges that cross $S$ even after an adversary removes $k$ vertices from $G$. We consider two types of adversaries: an adaptive adversary that sees the outcome of the random coin tosses used by the algorithm, and an oblivious adversary that does not. For any constant number of failures $k$ we present an approximation of $(0.878-\epsilon)$ against an adaptive adversary and of $\alpha_{GW} \approx 0.8786$ against an oblivious adversary (here $\alpha_{GW}$ is the approximation achieved by the random hyperplane algorithm of [Goemans-Williamson J. ACM '95]). Additionally, we present a hardness of approximation of $\alpha_{GW}$ against both types of adversaries, rendering our results (virtually) tight. The non-linear nature of the fault tolerant objective makes the design and analysis of algorithms harder when compared to the classic Max Cut. Hence, we employ approaches ranging from multi-objective optimization to LP duality and the ellipsoid algorithm to obtain our results.
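The $\alpha_{GW} \approx 0.8786$ factor refers to the random-hyperplane rounding of Goemans and Williamson; a minimal Python sketch of just that rounding step is below. It assumes the unit vectors from the Max Cut SDP relaxation are already given (here they are hand-made toy vectors), and it does not model the adversarial vertex removals studied in this paper.

import numpy as np

def random_hyperplane_cut(vectors, seed=None):
    # Goemans-Williamson rounding step: given one unit vector per vertex
    # (e.g. from the Max Cut SDP relaxation), draw a uniformly random
    # hyperplane through the origin and place each vertex on the side
    # its vector falls on.
    rng = np.random.default_rng(seed)
    d = len(next(iter(vectors.values())))
    normal = rng.normal(size=d)          # random hyperplane normal
    return {v: np.dot(x, normal) >= 0 for v, x in vectors.items()}

def cut_weight(edges, side):
    # Total weight of edges whose endpoints land on different sides.
    return sum(w for u, v, w in edges if side[u] != side[v])

# Toy usage with hand-made (not SDP-optimal) unit vectors.
vecs = {0: np.array([1.0, 0.0]), 1: np.array([-1.0, 0.0]),
        2: np.array([0.0, 1.0])}
edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 1.0)]
side = random_hyperplane_cut(vecs, seed=1)
print(cut_weight(edges, side))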
A $k$-spanner of a graph $G$ is a sparse subgraph that preserves its shortest path distances up to a multiplicative stretch factor of $k$, and a $k$-emulator is similar but not required to be a subgraph of $G$. A classic theorem by Thorup and Zwick [JACM 05] shows that, despite the extra flexibility available to emulators, the size/stretch tradeoffs for spanners and emulators are equivalent. Our main result is that this equivalence in tradeoffs no longer holds in the commonly-studied setting of graphs with vertex failures. That is: we introduce a natural definition of vertex fault-tolerant emulators, and then we show a three-way tradeoff between size, stretch, and fault-tolerance for these emulators that polynomially surpasses the tradeoff known to be optimal for spanners. We complement our emulator upper bound with a lower bound construction that is essentially tight (within $\log n$ factors of the upper bound) when the stretch is $2k-1$ and $k$ is either a fixed odd integer or $2$. We also show constructions of fault-tolerant emulators with additive error, demonstrating that these also enjoy significantly improved tradeoffs over those available for fault-tolerant additive spanners.
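A minimal Python check of the two definitions in the first sentence, in their plain (non-fault-tolerant) form; the fault-tolerant variant introduced in the paper additionally quantifies over fault sets. Here $H$ is taken to be a $k$-emulator of $G$ when $d_G(u,v) \le d_H(u,v) \le k \cdot d_G(u,v)$ for all pairs, and a $k$-spanner when in addition $H$ is a subgraph of $G$.

def all_pairs_dist(n, edges):
    # Floyd-Warshall on a weighted undirected graph given as
    # (u, v, w) triples over vertices 0..n-1.
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for m in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][m] + d[m][j] < d[i][j]:
                    d[i][j] = d[i][m] + d[m][j]
    return d

def is_k_emulator(n, g_edges, h_edges, k):
    # Stretch condition: distances in H never drop below those in G
    # and never exceed them by more than a factor of k.  (A spanner
    # would additionally require every edge of H to be an edge of G;
    # an emulator may use arbitrary weighted edges.)
    dg, dh = all_pairs_dist(n, g_edges), all_pairs_dist(n, h_edges)
    return all(dg[i][j] <= dh[i][j] <= k * dg[i][j]
               for i in range(n) for j in range(n))

# Example: dropping one edge of a unit-weight triangle leaves a
# 2-spanner, hence also a 2-emulator.
G = [(0, 1, 1), (1, 2, 1), (0, 2, 1)]
H = [(0, 1, 1), (1, 2, 1)]
print(is_k_emulator(3, G, H, 2))   # True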
We optimize the area and latency of Shor's factoring while simultaneously improving fault tolerance through: (1) balancing the use of ancilla generators, (2) aggressive optimization of error correction, and (3) tuning the core adder circuits. Our custom CAD flow produces detailed layouts of the physical components and utilizes simulation to analyze circuits in terms of area, latency, and success probability. We introduce a metric, called ADCR, which is the probabilistic equivalent of the classic Area-Delay product. Our error correction optimization can reduce ADCR by an order of magnitude or more. Contrary to conventional wisdom, we show that the area of an optimized quantum circuit is not dominated exclusively by error correction. Further, our adder evaluation shows that quantum carry-lookahead adders (QCLA) beat ripple-carry adders in ADCR, despite being larger and more complex. We conclude with what we believe is one of the most accurate estimates of the area and latency required for 1024-bit Shor's factorization: 7659 mm$^{2}$ for the smallest circuit and $6 \times 10^8$ seconds for the fastest circuit.
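The abstract defines ADCR only as "the probabilistic equivalent of the classic Area-Delay product"; one natural reading, sketched below in Python with made-up numbers, is the area-latency product scaled by the expected number of runs needed for a correct result. The paper's exact normalization may differ.

def adcr(area_mm2, latency_s, success_prob):
    # Area-Delay product divided by success probability, i.e. area times
    # the expected latency until a correct result.  A hedged reading of
    # the ADCR metric named in the abstract, not its verbatim definition.
    return area_mm2 * latency_s / success_prob

# Illustrative numbers only (not taken from the paper): a 5000 mm^2
# layout with 1e7 s latency and 80% success probability.
print(adcr(5000.0, 1e7, 0.8))   # 6.25e+10 mm^2 * s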
Recent work has established that, for every positive integer $k$, every $n$-node graph has a $(2k-1)$-spanner on $O(f^{1-1/k} n^{1+1/k})$ edges that is resilient to $f$ edge or vertex faults. For vertex faults, this bound is tight. However, the case of edge faults is not as well understood: the best known lower bound for general $k$ is $\Omega(f^{\frac{1}{2} - \frac{1}{2k}} n^{1+1/k} + fn)$. Our main result is to nearly close this gap with an improved upper bound, thus separating the cases of edge and vertex faults. For odd $k$, our new upper bound is $O_k(f^{\frac{1}{2} - \frac{1}{2k}} n^{1+1/k} + fn)$, which is tight up to hidden $\mathrm{poly}(k)$ factors. For even $k$, our new upper bound is $O_k(f^{1/2} n^{1+1/k} + fn)$, which leaves a gap of $\mathrm{poly}(k) f^{1/(2k)}$. Our proof is an analysis of the fault-tolerant greedy algorithm, which requires exponential time, but we also show that there is a polynomial-time algorithm which creates edge fault tolerant spanners that are larger only by factors of $k$.