
Point Line Cover: The Easy Kernel is Essentially Tight

Posted by: Geevarghese Philip
Publication date: 2013
Research field: Informatics Engineering
Paper language: English





The input to the NP-hard Point Line Cover problem (PLC) consists of a set $P$ of $n$ points in the plane and a positive integer $k$, and the question is whether there exists a set of at most $k$ lines which pass through all points in $P$. A simple polynomial-time reduction reduces any input to one with at most $k^2$ points. We show that this is essentially tight under standard assumptions. More precisely, unless the polynomial hierarchy collapses to its third level, there is no polynomial-time algorithm that reduces every instance $(P,k)$ of PLC to an equivalent instance with $O(k^{2-\epsilon})$ points, for any $\epsilon>0$. This answers, in the negative, an open problem posed by Lokshtanov (PhD Thesis, 2009). Our proof uses the machinery for deriving lower bounds on the size of kernels developed by Dell and van Melkebeek (STOC 2010). It has two main ingredients: We first show, by reduction from Vertex Cover, that PLC (conditionally) has no kernel of total size $O(k^{2-\epsilon})$ bits. This does not directly imply the claimed lower bound on the number of points, since the best known polynomial-time encoding of a PLC instance with $n$ points requires $\omega(n^{2})$ bits. To get around this we build on work of Goodman et al. (STOC 1989) and devise an oracle communication protocol of cost $O(n\log n)$ for PLC; its main building blocks are a bound of $n^{O(n)}$ on the number of order types of $n$ points that are not necessarily in general position, and an explicit algorithm that enumerates all possible order types of $n$ points. This protocol and the lower bound on total size together yield the stated lower bound on the number of points. While a number of essentially tight polynomial lower bounds on total sizes of kernels are known, our result is, to the best of our knowledge, the first to show a nontrivial lower bound for structural/secondary parameters.
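The "simple polynomial-time reduction" to at most $k^2$ points is the classical high-multiplicity rule: a line through more than $k$ points must belong to every solution of size at most $k$, since any other line can cover at most one of those points, so its points can be deleted and the budget decremented; once no such line remains, $k$ lines cover at most $k^2$ points, and a larger point set certifies a NO-instance. The following is a minimal sketch of this reduction in Python, assuming integer coordinates so that collinearity can be tested exactly (the function names are ours, for illustration only, and no attempt is made at efficiency):

```python
from itertools import combinations

def collinear(p, q, r):
    """Exact collinearity test via the cross product (integer coordinates assumed)."""
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

def plc_kernel(points, k):
    """Reduce a Point Line Cover instance (points, k) to an equivalent one
    with at most k^2 points, or decide it outright.

    points: list of (x, y) integer tuples; k: number of lines allowed.
    Returns "YES", "NO", or a reduced instance (points', k') with
    len(points') <= k'**2 <= k**2.
    """
    points = list(set(points))
    while points and k > 0:
        # Look for a line passing through more than k of the remaining points.
        heavy = None
        for p, q in combinations(points, 2):
            on_line = [r for r in points if collinear(p, q, r)]
            if len(on_line) > k:
                heavy = set(on_line)
                break
        if heavy is None:
            break  # every line now covers at most k of the remaining points
        # Every solution with at most k lines must use this line: otherwise
        # its > k points would each need a distinct line.  Delete its points
        # and spend one line of the budget.
        points = [r for r in points if r not in heavy]
        k -= 1
    if not points:
        return "YES"
    if k == 0 or len(points) > k * k:
        # k lines, each covering at most k remaining points, cover at most
        # k^2 points; more points than that certify a NO-instance.
        return "NO"
    return points, k
```

For example, `plc_kernel([(0, 0), (1, 1), (2, 2), (3, 3), (5, 0)], 2)` first removes the four collinear points (the line through them covers more than $k = 2$ points) and returns the reduced instance `([(5, 0)], 1)`.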


Read also

In most online problems with delay, clairvoyance (i.e., knowing the future delay of a request upon its arrival) is required for polylogarithmic competitiveness. In this paper, we show that this is not the case for set cover with delay (SCD): specifically, we present the first non-clairvoyant algorithm, which is $O(\log n \log m)$-competitive, where $n$ is the number of elements and $m$ is the number of sets. This matches the best known result for the classic online set cover (a special case of non-clairvoyant SCD). Moreover, clairvoyance does not allow for significant improvement: we present lower bounds of $\Omega(\sqrt{\log n})$ and $\Omega(\sqrt{\log m})$ for SCD which apply to the clairvoyant case. In addition, the competitiveness of our algorithm does not depend on the number of requests. Such a guarantee in terms of the size of the universe alone was not previously known even for the clairvoyant case; the only previously known algorithm (due to Carrasco et al.) is clairvoyant, with competitiveness that grows with the number of requests. For the special case of vertex cover with delay, we show a simpler, deterministic algorithm which is $3$-competitive (and also non-clairvoyant).
Is matching in NC, i.e., is there a deterministic fast parallel algorithm for it? This has been an outstanding open question in TCS for over three decades, ever since the discovery of randomized NC matching algorithms [KUW85, MVV87]. Over the last five years, the theoretical computer science community has launched a relentless attack on this question, leading to the discovery of several powerful ideas. We give what appears to be the culmination of this line of work: an NC algorithm for finding a minimum-weight perfect matching in a general graph with polynomially bounded edge weights, provided it is given an oracle for the decision problem. Consequently, for settling the main open problem, it suffices to obtain an NC algorithm for the decision problem. We believe this new fact has qualitatively changed the nature of this open problem. All known efficient matching algorithms for general graphs follow one of two approaches: those given by Edmonds [Edm65] and Lovasz [Lov79]. Our oracle-based algorithm follows a new approach and uses many of the ideas discovered in the last five years. The difficulty of obtaining an NC perfect matching algorithm led researchers to study matching vis-a-vis clever relaxations of the class NC. In this vein, Goldwasser and Grossman [GG15] recently gave a pseudo-deterministic RNC algorithm for finding a perfect matching in a bipartite graph, i.e., an RNC algorithm with the additional requirement that on the same graph, it should return the same (i.e., unique) perfect matching for almost all choices of random bits. A corollary of our reduction is an analogous algorithm for general graphs.
Several algorithms with an approximation guarantee of $O(\log n)$ are known for the Set Cover problem, where $n$ is the number of elements. We study a generalization of the Set Cover problem, called the Partition Set Cover problem. Here, the elements are partitioned into $r$ color classes, and we are required to cover at least $k_t$ elements from each color class $\mathcal{C}_t$, using the minimum number of sets. We give a randomized LP-rounding algorithm that is an $O(\beta + \log r)$ approximation for the Partition Set Cover problem. Here $\beta$ denotes the approximation guarantee for a related Set Cover instance obtained by rounding the standard LP. As a corollary, we obtain improved approximation guarantees for various set systems for which $\beta$ is known to be sublogarithmic in $n$. We also extend the LP rounding algorithm to obtain $O(\log r)$ approximations for similar generalizations of Facility Location type problems. Finally, we show that many of these results are essentially tight, by showing that it is NP-hard to obtain an $o(\log r)$-approximation for any of these problems.
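For orientation, the "standard LP" referred to above is the usual linear relaxation of Set Cover; a common textbook way to write the Partition Set Cover relaxation adds an auxiliary coverage variable $z_e$ per element (this generic formulation is our sketch and is not necessarily the exact LP used in the cited paper):

```latex
\begin{align*}
\min\ & \textstyle\sum_{S} x_S && \text{(fractional number of sets picked)}\\
\text{s.t.}\ & \textstyle\sum_{S \ni e} x_S \ \ge\ z_e && \text{for every element } e,\\
& \textstyle\sum_{e \in \mathcal{C}_t} z_e \ \ge\ k_t && \text{for every color class } \mathcal{C}_t,\\
& 0 \le x_S \le 1, \quad 0 \le z_e \le 1.
\end{align*}
```

Forcing $z_e = 1$ for every element and dropping the color-class constraints recovers the standard Set Cover LP.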
Amin Saberi, David Wajc (2021)
Nearly three decades ago, Bar-Noy, Motwani and Naor showed that no online edge-coloring algorithm can edge color a graph optimally. Indeed, their work, titled "The greedy algorithm is optimal for on-line edge coloring", shows that the competitive ratio of $2$ of the naive greedy algorithm is best possible online. However, their lower bound required bounded-degree graphs, of maximum degree $\Delta = O(\log n)$, which prompted them to conjecture that better bounds are possible for higher-degree graphs. While progress has been made towards resolving this conjecture for restricted inputs and arrivals or for random arrival orders, an answer for fully general adversarial arrivals remained elusive. We resolve this thirty-year-old conjecture in the affirmative, presenting a $(1.9+o(1))$-competitive online edge coloring algorithm for general graphs of degree $\Delta = \omega(\log n)$ under vertex arrivals. At the core of our results, and of possible independent interest, is a new online algorithm which rounds a fractional bipartite matching $x$ online under vertex arrivals, guaranteeing that each edge $e$ is matched with probability $(1/2+c)\cdot x_e$, for a constant $c>0.027$.
Reconfiguration schedules, i.e., sequences that gradually transform one solution of a problem to another while always maintaining feasibility, have been extensively studied. Most research has dealt with the decision problem of whether a reconfiguration schedule exists, and the complexity of finding one. A prime example is the reconfiguration of vertex covers. We initiate the study of batched vertex cover reconfiguration, which allows reconfiguring multiple vertices concurrently while requiring that any adversarial reconfiguration order within a batch maintains feasibility. The latter provides robustness, e.g., if the simultaneous reconfiguration of a batch cannot be guaranteed. The quality of a schedule is measured by the number of batches until all nodes are reconfigured, and its cost, i.e., the maximum size of an intermediate vertex cover. To set a baseline for batch reconfiguration, we show that for graphs belonging to one of the classes {cycles, trees, forests, chordal, cactus, even-hole-free, claw-free}, there are schedules that use $O(\varepsilon^{-1})$ batches and incur only a $1+\varepsilon$ multiplicative increase in cost over the best sequential schedules. Our main contribution is to compute such batch schedules in $O(\varepsilon^{-1}\log^* n)$ distributed time, which we also show to be tight. Further, we show that once we step out of these graph classes we face a very different situation. There are graph classes on which no efficient distributed algorithm can obtain the best (or almost best) existing schedule. Moreover, there are classes of bounded-degree graphs which do not admit any reconfiguration schedule without a large multiplicative increase in cost.