
Optimal-size problem kernels for $d$-Hitting Set in linear time and space

Added by René van Bevern
Publication date: 2020
Language: English





The known linear-time kernelizations for $d$-Hitting Set guarantee linear worst-case running times using a quadratic-size data structure (that is not fully initialized). Getting rid of this data structure, we show that problem kernels of asymptotically optimal size $O(k^d)$ for $d$-Hitting Set are computable in linear time and space. Additionally, we experimentally compare the linear-time kernelizations for $d$-Hitting Set to each other and to a classical data reduction algorithm due to Weihe.
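To give a concrete feel for the kind of data reduction involved, the following Python sketch implements the classical counting rule that underlies the $O(k^d)$ size bound: if some set $C$ of $j < d$ elements is contained in more than $k^{d-j}$ hyperedges (and the rule has already been applied exhaustively for larger cores), then every hitting set of size at most $k$ must intersect $C$, so all those hyperedges can be replaced by $C$ itself. This is a naive fixpoint implementation for illustration only, not the linear-time, linear-space algorithm of the paper, and the function name and interface are made up.

```python
from itertools import combinations


def kernelize_d_hitting_set(edges, d, k):
    """Naive counting-rule kernelization for d-Hitting Set (illustration only).

    edges: a collection of hyperedges, each an iterable of at most d elements.
    Returns (reduced_edges, definitely_no), where definitely_no is True when
    no hitting set of size at most k can exist.
    """
    edges = {frozenset(e) for e in edges}

    def apply_one_rule():
        # Cores are processed from large to small: the safeness argument for a
        # j-element core assumes the bound already holds for (j+1)-element cores.
        for j in range(d - 1, 0, -1):
            containing = {}
            for e in edges:
                for core in combinations(e, j):
                    containing.setdefault(frozenset(core), []).append(e)
            for core, supersets in containing.items():
                if len(supersets) > k ** (d - j):
                    # Every hitting set of size <= k must intersect `core`,
                    # so all hyperedges containing it collapse into the core.
                    edges.difference_update(supersets)
                    edges.add(core)
                    return True
        return False

    while apply_one_rule():
        pass

    # After exhaustive application, every element lies in at most k^(d-1)
    # hyperedges, so a yes-instance has at most k^d hyperedges left.
    return edges, len(edges) > k ** d
```

After exhaustive application, an instance that still has more than $k^d$ hyperedges cannot have a hitting set of size at most $k$, which is exactly the size guarantee quoted in the abstract; the contribution of the paper is achieving this guarantee in linear time and space rather than via such repeated recounting.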




Related research

We study the classic set cover problem from the perspective of sub-linear algorithms. Given access to a collection of $m$ sets over $n$ elements in the query model, we show that sub-linear algorithms derived from existing techniques have almost tight query complexities. On one hand, first we show an adaptation of the streaming algorithm presented in Har-Peled et al. [2016] to the sub-linear query model, that returns an $\alpha$-approximate cover using $\tilde{O}(m(n/k)^{1/(\alpha-1)} + nk)$ queries to the input, where $k$ denotes the value of a minimum set cover. We then complement this upper bound by proving that for lower values of $k$, the required number of queries is $\tilde{\Omega}(m(n/k)^{1/(2\alpha)})$, even for estimating the optimal cover size. Moreover, we prove that even checking whether a given collection of sets covers all the elements would require $\Omega(nk)$ queries. These two lower bounds provide strong evidence that the upper bound is almost tight for certain values of the parameter $k$. On the other hand, we show that this bound is not optimal for larger values of the parameter $k$, as there exists a $(1+\varepsilon)$-approximation algorithm with $\tilde{O}(mn/k\varepsilon^2)$ queries. We show that this bound is essentially tight for sufficiently small constant $\varepsilon$, by establishing a lower bound of $\tilde{\Omega}(mn/k)$ query complexity.
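As a point of reference for the query model, the sketch below shows one plausible oracle interface together with the brute-force coverage check; the exact primitives assumed in the cited work may differ, and all class and method names here are illustrative assumptions. The abstract's $\Omega(nk)$ lower bound says that even this verification step cannot be carried out with substantially fewer queries in the worst case.

```python
from typing import List, Set


class SetSystemOracle:
    """Query access to a set system of m sets over elements 0..n-1.

    The interface (set_size / elt_of) is an assumption made for illustration;
    the sub-linear model in the cited work may expose different primitives.
    """

    def __init__(self, sets: List[List[int]]):
        self._sets = sets
        self.queries = 0  # total number of oracle accesses performed

    def set_size(self, i: int) -> int:
        self.queries += 1
        return len(self._sets[i])

    def elt_of(self, i: int, j: int) -> int:
        # j-th element of the i-th set
        self.queries += 1
        return self._sets[i][j]


def covers_all(oracle: SetSystemOracle, chosen: List[int], n: int) -> bool:
    """Naively verify that the chosen sets cover all of {0, ..., n-1}.

    The abstract notes that verifying coverage already needs Omega(nk) queries
    in the worst case; this naive scan simply reads every chosen set in full.
    """
    seen: Set[int] = set()
    for i in chosen:
        for j in range(oracle.set_size(i)):
            seen.add(oracle.elt_of(i, j))
    return len(seen) == n


# Example: two sets covering {0, 1, 2, 3}; the counter records the query cost.
oracle = SetSystemOracle([[0, 1], [2, 3], [1, 2]])
print(covers_all(oracle, [0, 1], 4), oracle.queries)
```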
Gabriel Bathie, 2021
In a (parameterized) graph edge modification problem, we are given a graph $G$, an integer $k$ and a (usually well-structured) class of graphs $\mathcal{G}$, and ask whether it is possible to transform $G$ into a graph $G' \in \mathcal{G}$ by adding and/or removing at most $k$ edges. Parameterized graph edge modification problems received considerable attention in the last decades. In this paper, we focus on finding small kernels for edge modification problems. One of the most studied problems is the Cluster Editing problem, in which the goal is to partition the vertex set into a disjoint union of cliques. Even if this problem admits a $2k$ kernel [Cao, 2012], this kernel does not reduce the size of most instances. Therefore, we explore the question of whether linear kernels are a theoretical limit in edge modification problems, in particular when the target graphs are very structured (such as a partition into cliques, for instance). We prove what is, as far as we know, the first sublinear kernel for an edge modification problem. Namely, we show that Clique + Independent Set Deletion, which is a restriction of Cluster Deletion, admits a kernel of size $O(k/\log k)$. We also obtain small kernels for several other edge modification problems. We prove that Split Addition (and the equivalent Split Deletion) admits a linear kernel, improving the existing quadratic kernel of Ghosh et al. [Ghosh et al., 2015]. We complement this result by proving that Trivially Perfect Addition admits a quadratic kernel (improving the cubic kernel of Guo [Guo, 2007]), and finally prove that its triangle-free version (Starforest Deletion) admits a linear kernel, which is optimal under ETH.
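To make the problem definition concrete, here is a small, self-contained Python sketch of Cluster Editing solved by the textbook $O(3^k)$ branching on induced paths on three vertices. It is unrelated to the kernelization results above beyond sharing the problem definition, and all names are illustrative.

```python
from itertools import combinations


def find_p3(adj):
    """Return an induced path (u, v, w) with edges uv, vw and non-edge uw, or None.

    adj: dict mapping each vertex to the set of its neighbours (symmetric).
    A graph is a disjoint union of cliques iff it contains no such induced P3.
    """
    for v, nbrs in adj.items():
        for u, w in combinations(sorted(nbrs, key=repr), 2):
            if w not in adj[u]:
                return u, v, w
    return None


def cluster_editing(adj, k):
    """Can adj be turned into a cluster graph with at most k edge edits?

    Branches on an induced P3 by deleting one of its two edges or adding the
    missing one; every solution must resolve each P3 in one of these ways.
    """
    p3 = find_p3(adj)
    if p3 is None:
        return True   # already a disjoint union of cliques
    if k == 0:
        return False  # a P3 remains but no edits are left
    u, v, w = p3

    def edit(removals, additions):
        new = {x: set(n) for x, n in adj.items()}
        for a, b in removals:
            new[a].discard(b); new[b].discard(a)
        for a, b in additions:
            new[a].add(b); new[b].add(a)
        return new

    return (cluster_editing(edit([(u, v)], []), k - 1)
            or cluster_editing(edit([(v, w)], []), k - 1)
            or cluster_editing(edit([], [(u, w)]), k - 1))


# A path on three vertices needs exactly one edit (e.g. adding edge {0, 2}).
path = {0: {1}, 1: {0, 2}, 2: {1}}
print(cluster_editing(path, 1))  # True
print(cluster_editing(path, 0))  # False
```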
The NP-hard Multiple Hitting Set problem asks for a minimum-cardinality set intersecting each of the sets in a given input collection a given number of times. Generalizing a well-known data reduction algorithm due to Weihe, we show a problem kernel for Multiple Hitting Set parameterized by the Dilworth number, a graph parameter introduced by Foldes and Hammer in 1978 yet seemingly so far unexplored in the context of parameterized complexity theory. Using matrix multiplication, we speed up the algorithm to quadratic sequential time and logarithmic parallel time. We experimentally evaluate our algorithms. By implementing our algorithm on GPUs, we show the feasibility of realizing kernelization algorithms on SIMD (Single Instruction, Multiple Data) architectures.
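For context, the classical reduction being generalized consists of two domination rules: a hyperedge that contains another hyperedge is redundant, and an element whose hyperedges are all also covered by some other element can be discarded. Below is a minimal Python sketch of these rules for plain Hitting Set; it is a naive fixpoint implementation, not the multiplicity-aware, matrix-multiplication-based, or GPU algorithm described in the abstract, and the function name is made up.

```python
def weihe_reduce(edges):
    """Exhaustively apply Weihe's two domination rules for plain Hitting Set."""
    # Work on a duplicate-free list of mutable sets.
    edges = [set(e) for e in {frozenset(e) for e in edges}]

    def one_step():
        # Rule 1 (dominated hyperedge): if some f is a proper subset of e,
        # anything hitting f also hits e, so e is redundant.
        for e in edges:
            if any(f < e for f in edges):
                edges.remove(e)
                return True
        if not edges:
            return False
        # Rule 2 (dominated element): if every hyperedge containing u also
        # contains v, then v can always stand in for u, so u can be deleted
        # from all hyperedges (ties are broken so that one of the two stays).
        elements = sorted(set().union(*edges), key=repr)
        occ = {x: frozenset(i for i, e in enumerate(edges) if x in e)
               for x in elements}
        for u in elements:
            for v in elements:
                if u == v:
                    continue
                if occ[u] < occ[v] or (occ[u] == occ[v] and repr(v) < repr(u)):
                    for e in edges:
                        e.discard(u)
                    return True
        return False

    while one_step():
        pass
    return edges
```

Both rules preserve the minimum hitting set size: dropping a superset edge removes no constraint that is not already implied, and a dominated element can always be replaced by its dominator in any solution.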
In this work, we study longest common substring, pattern matching, and wildcard pattern matching in the asymmetric streaming model. In this streaming model, we have random access to one string and streaming access to the other one. We present streaming algorithms with provable guarantees for these three fundamental problems. In particular, our algorithms for pattern matching improve the upper bound and beat the unconditional lower bounds on the memory of randomized and deterministic streaming algorithms. In addition to this, we present algorithms for wildcard pattern matching in the asymmetric streaming model that have optimal space and time.
The Matrix Spencer Conjecture asks whether, given $n$ symmetric matrices in $\mathbb{R}^{n \times n}$ with eigenvalues in $[-1,1]$, one can always find signs so that their signed sum has singular values bounded by $O(\sqrt{n})$. The standard approach in discrepancy requires proving that the convex body of all good fractional signings is large enough. However, this question has remained wide open due to the lack of tools to certify measure lower bounds for rather small non-polyhedral convex sets. A seminal result by Batson, Spielman and Srivastava from 2008 shows that any undirected graph admits a linear-size spectral sparsifier. Again, one can define a convex body of all good fractional signings. We can indeed prove that this body is close to most of the Gaussian measure. This implies that a discrepancy algorithm by the second author can be used to sample a linear-size sparsifier. In contrast to previous methods, we require only a logarithmic number of sampling phases.