
Anti-Path Cover on Sparse Graph Classes

Published by: EPTCS
Publication date: 2016
Research field: Computer Science (Informatics)
Paper language: English





We show that the Bondy-Chvátal closure can be used to design an FPT algorithm that decides whether the vertices of an input graph can be covered by at most k vertex-disjoint paths in the complement of the input graph. More precisely, we show that if a graph has tree-width at most w and its complement is closed under the Bondy-Chvátal closure, then the neighborhood diversity of the complement can be bounded by a function of w only. A simpler proof that uses tree-depth instead of tree-width is also presented.
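For context, the Bondy-Chvátal closure of an $n$-vertex graph is obtained by repeatedly adding an edge between two non-adjacent vertices whose degree sum is at least $n$, until no such pair remains; the abstract above considers complements that are closed under this operation. The Python sketch below (illustrative only, not code from the paper) computes the closure naively.

```python
def bondy_chvatal_closure(n, edges):
    """Return the edge set of the Bondy-Chvatal closure of a graph on
    vertices 0..n-1, given an iterable of edges (u, v)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    changed = True
    while changed:
        changed = False
        for u in range(n):
            for v in range(u + 1, n):
                # add {u, v} if non-adjacent and deg(u) + deg(v) >= n
                if v not in adj[u] and len(adj[u]) + len(adj[v]) >= n:
                    adj[u].add(v)
                    adj[v].add(u)
                    changed = True
    return {(u, v) for u in range(n) for v in adj[u] if u < v}


# Example: a path on 4 vertices is already closed (all degree sums < 4),
# while a cycle on 4 vertices closes to the complete graph K4.
print(bondy_chvatal_closure(4, [(0, 1), (1, 2), (2, 3)]))
print(bondy_chvatal_closure(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```

The closure is well defined: the resulting graph does not depend on the order in which eligible edges are added.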




Read also

We give a linear-time algorithm that checks for isomorphism between two 0-1 matrices that obey the circular-ones property. This algorithm leads to linear-time isomorphism algorithms for related graph classes, including Helly circular-arc graphs, Gamma-circular-arc graphs, proper circular-arc graphs and convex-round graphs.
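For illustration, a 0-1 matrix has the circular-ones property if its columns can be permuted so that the 1s in each row form a circularly consecutive block. The Python sketch below (ours, not from the paper) merely verifies this condition for a fixed column order; actually finding a valid permutation is the harder recognition task, typically handled with PQ- or PC-tree techniques.

```python
def has_circular_ones_rows(matrix):
    """Check whether, under the given column order, the 1s of every row form
    a single circularly consecutive block (equivalently, at most two places
    where adjacent entries differ, wrapping around)."""
    for row in matrix:
        m = len(row)
        transitions = sum(1 for i in range(m) if row[i] != row[(i + 1) % m])
        if transitions > 2:
            return False
    return True

# Example: the first matrix satisfies the condition row-wise as given,
# the second does not (its single row has four transitions).
print(has_circular_ones_rows([[1, 1, 0, 0], [0, 1, 1, 0], [1, 0, 0, 1]]))  # True
print(has_circular_ones_rows([[1, 0, 1, 0]]))                              # False
```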
Gabriel Bathie, 2021
In a (parameterized) graph edge modification problem, we are given a graph $G$, an integer $k$ and a (usually well-structured) class of graphs $\mathcal{G}$, and ask whether it is possible to transform $G$ into a graph $G' \in \mathcal{G}$ by adding and/or removing at most $k$ edges. Parameterized graph edge modification problems received considerable attention in the last decades. In this paper, we focus on finding small kernels for edge modification problems. One of the most studied problems is the Cluster Editing problem, in which the goal is to partition the vertex set into a disjoint union of cliques. Even if this problem admits a $2k$ kernel [Cao, 2012], this kernel does not reduce the size of most instances. Therefore, we explore the question of whether linear kernels are a theoretical limit in edge modification problems, in particular when the target graphs are very structured (such as a partition into cliques, for instance). We prove, as far as we know, the first sublinear kernel for an edge modification problem. Namely, we show that Clique + Independent Set Deletion, which is a restriction of Cluster Deletion, admits a kernel of size $O(k/\log k)$. We also obtain small kernels for several other edge modification problems. We prove that Split Addition (and the equivalent Split Deletion) admits a linear kernel, improving the existing quadratic kernel of Ghosh et al. [Ghosh et al., 2015]. We complement this result by proving that Trivially Perfect Addition admits a quadratic kernel (improving the cubic kernel of Guo [Guo, 2007]), and finally prove that its triangle-free version (Starforest Deletion) admits a linear kernel, which is optimal under ETH.
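To make the edge modification setting concrete, here is a minimal Python sketch (ours, not from the paper) of the classical $O(3^k)$ branching algorithm for Cluster Editing: a graph is a disjoint union of cliques exactly when it contains no induced path on three vertices, so it suffices to branch on the three ways of destroying such a path.

```python
from itertools import combinations

def find_induced_p3(adj):
    """Return an induced P3 (u, v, w) with edges uv, vw and non-edge uw,
    or None if the graph is already a disjoint union of cliques."""
    for v in adj:
        for u, w in combinations(adj[v], 2):
            if w not in adj[u]:
                return u, v, w
    return None

def toggle(adj, a, b):
    """Flip the edge {a, b}: delete it if present, insert it otherwise."""
    if b in adj[a]:
        adj[a].remove(b); adj[b].remove(a)
    else:
        adj[a].add(b); adj[b].add(a)

def cluster_editing(adj, k):
    """Decide whether at most k edge insertions/deletions turn adj into a
    disjoint union of cliques (classical O(3^k) branching)."""
    p3 = find_induced_p3(adj)
    if p3 is None:
        return True
    if k == 0:
        return False
    u, v, w = p3
    for a, b in ((u, v), (v, w), (u, w)):  # delete uv, delete vw, or add uw
        toggle(adj, a, b)
        ok = cluster_editing(adj, k - 1)
        toggle(adj, a, b)                  # restore the graph before the next branch
        if ok:
            return True
    return False

# Example: a path on three vertices needs exactly one modification.
g = {0: {1}, 1: {0, 2}, 2: {1}}
print(cluster_editing(g, 0), cluster_editing(g, 1))  # False True
```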
The Connected Vertex Cover problem, where the goal is to compute a minimum set of vertices in a given graph which forms a vertex cover and induces a connected subgraph, is a fundamental combinatorial problem and has received extensive attention in various subdomains of algorithmics. In the area of kernelization, it is known that this problem is unlikely to have efficient preprocessing algorithms, also known as polynomial kernelizations. However, it has been shown in a recent work of Lokshtanov et al. [STOC 2017] that if one considers an appropriate notion of approximate kernelization, then this problem parameterized by the solution size does admit an approximate polynomial kernelization. In fact, Lokshtanov et al. were able to obtain a polynomial size approximate kernelization scheme (PSAKS) for Connected Vertex Cover parameterized by the solution size. A PSAKS is essentially a preprocessing algorithm whose error can be made arbitrarily close to 0. In this paper we revisit this problem, and consider parameters that are strictly smaller than the size of the solution and obtain the first polynomial size approximate kernelization schemes for the Connected Vertex Cover problem when parameterized by the deletion distance of the input graph to the class of cographs, the class of bounded treewidth graphs, and the class of all chordal graphs.
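As a point of reference for the problem itself (independent of the kernelization results above), the internal vertices of any DFS tree of a connected graph form a connected vertex cover of size at most twice the optimum, an observation going back to Savage. A minimal Python sketch, assuming the graph is connected and given as an adjacency dictionary:

```python
import sys

def connected_vertex_cover_2approx(adj):
    """Return the internal (non-leaf) vertices of a DFS tree of a connected
    graph; they form a connected vertex cover of size at most 2 * OPT."""
    sys.setrecursionlimit(max(10000, 2 * len(adj)))
    root = next(iter(adj))
    visited = {root}
    internal = set()

    def dfs(v):
        for w in adj[v]:
            if w not in visited:
                visited.add(w)
                internal.add(v)  # v has a DFS child, hence it is internal
                dfs(w)

    dfs(root)
    return internal

# Example: a star has the single-vertex cover {0}; the path 0-1-2-3 yields
# {0, 1, 2} when the DFS starts at an endpoint (the optimum is {1, 2}).
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(connected_vertex_cover_2approx(star), connected_vertex_cover_2approx(path))
```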
We study the generalized min sum set cover (GMSSC) problem, wherein given a collection of hyperedges $E$ with arbitrary covering requirements $k_e$, the goal is to find an ordering of the vertices to minimize the total cover time of the hyperedges; a hyperedge $e$ is considered covered at the first time when $k_e$ many of its vertices appear in the ordering. We give a $4.642$ approximation algorithm for GMSSC, coming close to the best possible bound of $4$, already for the classical special case (with all $k_e=1$) of min sum set cover (MSSC) studied by Feige, Lovász and Tetali, and improving upon the previous best known bound of $12.4$ due to Im, Sviridenko and van der Zwaan. Our algorithm is based on transforming the LP solution by a suitable kernel and applying randomized rounding. This also gives an LP-based $4$ approximation for MSSC. As part of the analysis of our algorithm, we also derive an inequality on the lower tail of a sum of independent Bernoulli random variables, which might be of independent interest and broader utility. Another well-known special case is the min sum vertex cover (MSVC) problem, in which the input hypergraph is a graph and $k_e = 1$ for every edge. We give a $16/9$ approximation for MSVC, and show a matching integrality gap for the natural LP relaxation. This improves upon the previous best $1.999946$ approximation of Barenholz, Feige and Peleg. (The claimed $1.79$ approximation result of Iwata, Tetali and Tripathi for MSVC turned out to have an unfortunate, seemingly unfixable, mistake in it.) Finally, we revisit MSSC and consider the $\ell_p$ norm of the cover times of the hyperedges. Using a dual fitting argument, we show that the natural greedy algorithm achieves tight, up to NP-hardness, approximation guarantees of $(p+1)^{1+1/p}$, for all $p \ge 1$. For $p=1$, this gives yet another proof of the $4$ approximation for MSSC.
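For the classical MSSC special case (all $k_e = 1$), the natural greedy algorithm mentioned at the end simply keeps picking the vertex that hits the largest number of still-uncovered hyperedges. A minimal Python sketch (ours; names are illustrative):

```python
def greedy_mssc_order(vertices, hyperedges):
    """Natural greedy for min sum set cover (all k_e = 1): repeatedly pick
    the vertex contained in the most still-uncovered hyperedges."""
    uncovered = [set(e) for e in hyperedges]
    remaining = list(vertices)
    order = []
    while remaining and uncovered:
        best = max(remaining, key=lambda v: sum(1 for e in uncovered if v in e))
        order.append(best)
        remaining.remove(best)
        uncovered = [e for e in uncovered if best not in e]
    order.extend(remaining)  # vertices that cover nothing go at the end
    return order

def total_cover_time(order, hyperedges):
    """Sum over hyperedges of the first (1-indexed) position that hits them."""
    pos = {v: i + 1 for i, v in enumerate(order)}
    return sum(min(pos[v] for v in e) for e in hyperedges)

# Example: three small sets over vertices a, b, c; prints ['a', 'b', 'c'] 4.
edges = [{"a", "b"}, {"a", "c"}, {"b"}]
order = greedy_mssc_order(["a", "b", "c"], edges)
print(order, total_cover_time(order, edges))
```

Note that for $p = 1$ the guarantee $(p+1)^{1+1/p}$ above evaluates to $2^2 = 4$, matching the classical bound for MSSC.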
We study the classic set cover problem from the perspective of sub-linear algorithms. Given access to a collection of $m$ sets over $n$ elements in the query model, we show that sub-linear algorithms derived from existing techniques have almost tight query complexities. On one hand, first we show an adaptation of the streaming algorithm presented in Har-Peled et al. [2016] to the sub-linear query model, that returns an $\alpha$-approximate cover using $\tilde{O}(m(n/k)^{1/(\alpha-1)} + nk)$ queries to the input, where $k$ denotes the value of a minimum set cover. We then complement this upper bound by proving that for lower values of $k$, the required number of queries is $\tilde{\Omega}(m(n/k)^{1/(2\alpha)})$, even for estimating the optimal cover size. Moreover, we prove that even checking whether a given collection of sets covers all the elements would require $\Omega(nk)$ queries. These two lower bounds provide strong evidence that the upper bound is almost tight for certain values of the parameter $k$. On the other hand, we show that this bound is not optimal for larger values of the parameter $k$, as there exists a $(1+\varepsilon)$-approximation algorithm with $\tilde{O}(mn/(k\varepsilon^2))$ queries. We show that this bound is essentially tight for sufficiently small constant $\varepsilon$, by establishing a lower bound of $\tilde{\Omega}(mn/k)$ query complexity.