
Contracting a Planar Graph Efficiently

Posted by Adam Karczmarz
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





We present a data structure that can maintain a simple planar graph under edge contractions in linear total time. The data structure supports adjacency queries and provides access to neighbor lists in $O(1)$ time. Moreover, it can report all the arising self-loops and parallel edges. By applying the data structure, we can achieve optimal running times for decremental bridge detection, 2-edge connectivity, maximal 3-edge-connected components, and the problem of finding a unique perfect matching in a static planar graph. Furthermore, we improve the running times of algorithms for several planar graph problems, including decremental 2-vertex and 3-edge connectivity, and we show that, using our data structure in a black-box manner, one obtains conceptually simple optimal algorithms for computing an MST and a 5-coloring of a planar graph.
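The linear total-time bound relies on planarity; the abstract describes only the interface. As a point of reference, the sketch below (our own naming, e.g. `ContractibleGraph`) implements the same interface for an arbitrary simple graph using union-find and hash-set neighbor lists. It is a naive baseline with superlinear worst-case total time, not the paper's construction.

```python
from collections import defaultdict

class ContractibleGraph:
    """Naive edge-contraction structure: union-find for merged vertices,
    hash sets for neighbor lists. Illustrates the interface only; the
    paper's structure achieves linear *total* time by exploiting planarity."""

    def __init__(self, n, edges):
        self.parent = list(range(n))
        self.adj = defaultdict(set)
        for u, v in edges:
            self.adj[u].add(v)
            self.adj[v].add(u)

    def find(self, u):  # union-find root, with path halving
        while self.parent[u] != u:
            self.parent[u] = self.parent[self.parent[u]]
            u = self.parent[u]
        return u

    def adjacent(self, u, v):  # adjacency query on the contracted graph
        u, v = self.find(u), self.find(v)
        return v in self.adj[u]

    def neighbors(self, u):  # neighbor list of u's contracted vertex
        return iter(self.adj[self.find(u)])

    def contract(self, u, v):
        """Contract edge (u, v); return the edges that became parallel.
        (The paper's structure also reports arising self-loops; here the
        contracted u-v edge is simply discarded.)"""
        u, v = self.find(u), self.find(v)
        if u == v:
            return []
        if len(self.adj[u]) < len(self.adj[v]):  # merge smaller into larger
            u, v = v, u
        self.parent[v] = u
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        parallel = []
        for w in self.adj[v]:
            self.adj[w].discard(v)
            if w in self.adj[u]:       # edge (u, w) already present
                parallel.append((u, w))
            else:
                self.adj[u].add(w)
                self.adj[w].add(u)
        del self.adj[v]
        return parallel
```

The smaller-into-larger merge keeps this baseline at $O(m \log n)$ expected set operations overall; closing the gap to linear total time is exactly what requires the planar machinery.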




Read also

We present a sublinear-time algorithm that allows one to sample multiple edges from a distribution that is pointwise $\epsilon$-close to the uniform distribution, in an \emph{amortized-efficient} fashion. We consider the adjacency list query model, where access to a graph $G$ is given via degree and neighbor queries. The problem of sampling a single edge in this model has been raised by Eden and Rosenbaum (SOSA 18). Let $n$ and $m$ denote the number of vertices and edges of $G$, respectively. Eden and Rosenbaum provided upper and lower bounds of $\Theta^*(n/\sqrt{m})$ for sampling a single edge in general graphs (where $O^*(\cdot)$ suppresses $\mathrm{poly}(1/\epsilon)$ and $\mathrm{poly}(\log n)$ dependencies). We ask whether the query complexity lower bound for sampling a single edge can be circumvented when multiple samples are required. That is, can we get an improved amortized per-sample cost if we allow a preprocessing phase? We answer in the affirmative. We present an algorithm that, if one knows the number of required samples $q$ in advance, has an overall cost that is sublinear in $q$, namely, $O^*(\sqrt{q} \cdot (n/\sqrt{m}))$, which is strictly preferable to the $O^*(q \cdot (n/\sqrt{m}))$ cost resulting from $q$ invocations of the algorithm by Eden and Rosenbaum. Subsequent to a preliminary version of this work, Tětek and Thorup (arXiv, preprint) proved that this bound is essentially optimal.
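For context, the sketch below implements a classical rejection-sampling edge sampler in the same adjacency-list model (degree and neighbor queries only). It returns exactly uniform edges but needs a known degree bound `d_max` and spends expected $O(n \cdot d_{\max}/m)$ queries per sample, so it is a textbook baseline, not the algorithm of Eden and Rosenbaum or of this paper; `degree`, `neighbor`, and `d_max` are our assumed interface.

```python
import random

def sample_edge(n, degree, neighbor, d_max):
    """Uniform edge sampling via rejection, in the adjacency-list model.

    degree(v)      -> deg(v)         (degree query)
    neighbor(v, i) -> i-th neighbor  (neighbor query)
    d_max          -> known upper bound on the maximum degree

    Each directed edge (v, w) is output with probability 1/(n * d_max):
    v is drawn uniformly, kept with probability deg(v)/d_max, and then a
    uniform neighbor is returned, so the result is exactly uniform.
    """
    while True:
        v = random.randrange(n)                    # uniform vertex
        d = degree(v)
        if d > 0 and random.random() < d / d_max:  # accept v w.p. deg(v)/d_max
            return v, neighbor(v, random.randrange(d))
```

On bounded-degree graphs the rejection loop is cheap; the interesting regime for the paper is general graphs, where $d_{\max}$ can be as large as $n$ and the rejection rate becomes the bottleneck.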
A cornerstone theorem in the Graph Minors series of Robertson and Seymour is the result that every graph $G$ with no minor isomorphic to a fixed graph $H$ has a certain structure. The structure can then be exploited to deduce far-reaching consequences. The exact statement requires some explanation, but roughly it says that there exist integers $k,n$ depending on $H$ only such that $0<k<n$ and for every $n\times n$ grid minor $J$ of $G$, the graph $G$ has a $k$-near embedding in a surface $\Sigma$ that does not embed $H$, in such a way that a substantial part of $J$ is embedded in $\Sigma$. Here a $k$-near embedding means that after deleting at most $k$ vertices the graph can be drawn in $\Sigma$ without crossings, except for local areas of non-planarity, where crossings are permitted, but at most $k$ of these areas are attached to the rest of the graph by four or more vertices, and inside those the graph is constrained in a different way, again depending on the parameter $k$. The original and only proof so far is quite long and uses many results developed in the Graph Minors series. We give a proof that uses only our earlier paper [A new proof of the flat wall theorem, {\it J. Combin. Theory Ser. B} {\bf 129} (2018), 158--203] and results from graduate textbooks. Our proof is constructive and yields a polynomial-time algorithm to construct such a structure. We also give explicit constants for the structure theorem, whereas the original proof only guarantees the existence of such constants.
The (non-uniform) sparsest cut problem is the following graph-partitioning problem: given a supply graph and demands on pairs of vertices, delete some subset of supply edges to minimize the ratio of the supply edges cut to the total demand of the pairs separated by this deletion. Despite much effort, there are only a handful of nontrivial classes of supply graphs for which constant-factor approximations are known. We consider the problem for planar graphs, and give a $(2+\varepsilon)$-approximation algorithm that runs in quasipolynomial time. Our approach defines a new structural decomposition of an optimal solution using a patching primitive. We combine this decomposition with a Sherali-Adams-style linear programming relaxation of the problem, which we then round. This should be compared with the polynomial-time approximation algorithm of Rao (1999), which uses the metric linear programming relaxation and $\ell_1$-embeddings, and achieves an $O(\sqrt{\log n})$-approximation in polynomial time.
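To make the objective concrete, here is a brute-force evaluation of the sparsest cut ratio by enumerating bipartitions, assuming unit supply capacities. It is exponential-time and purely illustrative of the problem definition, not the paper's quasipolynomial-time algorithm.

```python
from itertools import combinations

def sparsest_cut(vertices, supply_edges, demands):
    """Exact non-uniform sparsest cut by exhaustive search.

    supply_edges: list of (u, v) pairs (unit capacities assumed)
    demands:      list of (s, t, dem) triples
    Returns the minimum, over bipartitions (S, V \\ S), of
        (# supply edges crossing) / (total demand separated).
    """
    vs = list(vertices)
    best, best_side = float('inf'), None
    for k in range(1, len(vs)):                 # nonempty proper subsets
        for side in combinations(vs, k):
            S = set(side)
            cut = sum(1 for u, v in supply_edges if (u in S) != (v in S))
            sep = sum(d for s, t, d in demands if (s in S) != (t in S))
            if sep > 0 and cut / sep < best:
                best, best_side = cut / sep, S
    return best, best_side
```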
Finding a minimum vertex cover in a network is a fundamental NP-complete graph problem. One way to deal with its computational hardness is to trade the qualitative performance of an algorithm (allowing non-optimal outputs) for an improved running time. For the vertex cover problem, there is a gap between theory and practice when it comes to understanding this tradeoff. On the one hand, it is known that it is NP-hard to approximate a minimum vertex cover within a factor of $\sqrt{2}$. On the other hand, a simple greedy algorithm yields close to optimal approximations in practice. A promising approach towards understanding this discrepancy is to recognize the differences between theoretical worst-case instances and real-world networks. Following this direction, we close the gap between theory and practice by providing an algorithm that efficiently computes nearly optimal vertex cover approximations on hyperbolic random graphs, a network model that closely resembles real-world networks in terms of degree distribution, clustering, and the small-world property. More precisely, our algorithm computes a $(1 + o(1))$-approximation, asymptotically almost surely, and has a running time of $\mathcal{O}(m \log n)$. The proposed algorithm is an adaptation of the successful greedy approach, enhanced with a procedure that improves on parts of the graph where greedy is not optimal. This makes it possible to introduce a parameter that can be used to tune the tradeoff between approximation performance and running time. Our empirical evaluation on real-world networks shows that this allows for improving over the near-optimal results of the greedy approach.
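The greedy heuristic the abstract builds on is the standard one: repeatedly add a maximum-degree vertex to the cover and delete its incident edges. A minimal sketch of that baseline, without the paper's improvement procedure, tuning parameter, or $\mathcal{O}(m \log n)$ bookkeeping:

```python
def greedy_vertex_cover(adj):
    """Greedy cover: repeatedly add a max-degree vertex and remove its
    incident edges. `adj` maps every vertex to its set of neighbors
    (symmetric) and is consumed in place. This is the baseline heuristic
    the paper refines, using a linear scan per step for simplicity."""
    cover = set()
    while any(adj.values()):                     # edges remain
        v = max(adj, key=lambda u: len(adj[u]))  # max-degree vertex
        cover.add(v)
        for w in adj.pop(v):                     # delete v and its edges
            adj[w].discard(v)
    return cover
```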
We study the problem of online graph exploration on undirected graphs, where a searcher has to visit every vertex and return to the origin. Once a new vertex is visited, the searcher learns of all neighboring vertices and the connecting edge weights. The goal of such an exploration is to minimize its total cost, where each edge traversal incurs a cost of the corresponding edge weight. We investigate the problem on tadpole graphs (also known as dragons or kites), which consist of a cycle with an attached path. Miyazaki et al. (The online graph exploration problem on restricted graphs, IEICE Transactions 92-D (9), 2009) showed that every online algorithm on these graphs must have a competitive ratio of at least $2-\epsilon$, but did not provide upper bounds for non-unit edge weights. We show via amortized analysis that a greedy approach yields a matching competitive ratio of 2 on tadpole graphs, for arbitrary non-negative edge weights.
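Here "greedy" is naturally read as the nearest-neighbor strategy: always walk to the cheapest known-but-unvisited vertex (via shortest paths in the explored subgraph), and finally walk back to the origin. The sketch below implements that reading; the `expose` callback is our stand-in for the online model (it reveals a vertex's incident edges on first visit), not an interface from the paper.

```python
import heapq

def dijkstra(known, src):
    """Shortest-path distances from src in the currently known subgraph."""
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for w, wt in known[u].items():
            if d + wt < dist.get(w, float('inf')):
                dist[w] = d + wt
                heapq.heappush(pq, (d + wt, w))
    return dist

def greedy_explore(start, expose):
    """Nearest-neighbor online exploration of a connected weighted graph.
    expose(v) yields (neighbor, weight) pairs when v is first visited.
    Returns the total traversal cost, including the walk back to start."""
    known, visited = {}, set()

    def learn(v):  # visit v and record its revealed incident edges
        visited.add(v)
        known.setdefault(v, {})
        for w, wt in expose(v):
            known[v][w] = wt
            known.setdefault(w, {})[v] = wt

    learn(start)
    pos, cost = start, 0.0
    while True:
        dist = dijkstra(known, pos)
        frontier = [v for v in dist if v not in visited]
        if not frontier:                     # every known vertex visited
            break
        v = min(frontier, key=dist.get)      # greedy: cheapest reachable target
        cost += dist[v]
        pos = v
        learn(v)
    return cost + dijkstra(known, pos)[start]  # walk back to the origin
```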