
Lossy Kernelization

Added by Fahad Panolan
Publication date: 2016
Language: English





In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size $\alpha$-approximate kernel. Loosely speaking, a polynomial size $\alpha$-approximate kernel is a polynomial time pre-processing algorithm that takes as input an instance $(I,k)$ of a parameterized problem and outputs another instance $(I',k')$ of the same problem, such that $|I'|+k' \leq k^{O(1)}$. Additionally, for every $c \geq 1$, a $c$-approximate solution $s'$ to the pre-processed instance $(I',k')$ can be turned in polynomial time into a $(c \cdot \alpha)$-approximate solution $s$ to the original instance $(I,k)$. Our main technical contributions are $\alpha$-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless $NP \subseteq coNP/poly$. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size and the hardness of approximation lower bounds for all three problems. On the negative side, we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an $\alpha$-approximate kernel of polynomial size, for any $\alpha \geq 1$, unless $NP \subseteq coNP/poly$. In order to prove this lower bound we need to combine, in a non-trivial way, the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation.
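For concreteness, the definition above can be read as a pair of polynomial time algorithms, written here with illustrative names ($\mathcal{R}$ and $\mathcal{L}$ are notation chosen for this sketch, not taken from the paper): a reduction algorithm $\mathcal{R}$ with $\mathcal{R}(I,k) = (I',k')$ and $|I'| + k' \leq k^{O(1)}$, and a solution-lifting algorithm $\mathcal{L}$ that, given any $c$-approximate solution $s'$ to $(I',k')$ with $c \geq 1$, outputs a solution $s = \mathcal{L}(I,k,I',k',s')$ that is $(c \cdot \alpha)$-approximate for $(I,k)$. For example, with $\alpha = 2$, a $1.1$-approximate solution to the reduced instance lifts to a $2.2$-approximate solution to the original instance.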



Related research

In this work, we study the $k$-median clustering problem with an additional equal-size constraint on the clusters, from the perspective of parameterized preprocessing. Our main result is the first lossy ($2$-approximate) polynomial kernel for this problem, parameterized by the cost of clustering. We complement this result with lower bounds that rule out the existence of an (exact) kernel of polynomial size and of a PTAS.
A fundamental theorem of Whitney from 1933 asserts that 2-connected graphs G and H are 2-isomorphic, or equivalently, that their cycle matroids are isomorphic, if and only if G can be transformed into H by a series of operations called Whitney switches. In this paper we consider the quantitative question arising from Whitney's theorem: given two 2-isomorphic graphs, can we transform one into the other by applying at most k Whitney switches? This problem is already NP-complete for cycles, and we investigate its parameterized complexity. We show that the problem admits a kernel of size O(k), and thus is fixed-parameter tractable when parameterized by k.
We study the algorithmic properties of the graph class Chordal-ke, that is, graphs that can be turned into a chordal graph by adding at most k edges or, equivalently, the class of graphs of fill-in at most k. We discover that a number of fundamental intractable optimization problems parameterized by k admit subexponential algorithms on graphs from Chordal-ke. We identify a large class of optimization problems on Chordal-ke that admit algorithms with the typical running time $2^{O(\sqrt{k}\log k)}\cdot n^{O(1)}$. Examples of problems from this class are finding an independent set of maximum weight, finding a feedback vertex set or an odd cycle transversal of minimum weight, and finding a maximum induced planar subgraph. On the other hand, we show that some fundamental optimization problems, like finding an optimal graph coloring or finding a maximum clique, are FPT on Chordal-ke when parameterized by k but do not admit algorithms subexponential in k unless the ETH fails. Besides subexponential time algorithms, the class of Chordal-ke graphs appears to be appealing from the perspective of kernelization (with parameter k). While it is possible to show that most of the weighted variants of optimization problems do not admit kernels polynomial in k on Chordal-ke graphs, this does not exclude the existence of Turing kernelization and of kernelization for unweighted graphs. In particular, we construct a polynomial Turing kernel for Weighted Clique on Chordal-ke graphs. For (unweighted) Independent Set we design polynomial kernels on two interesting subclasses of Chordal-ke, namely, Interval-ke and Split-ke graphs.
The three-in-a-tree problem asks for an induced tree of the input graph containing three mandatory vertices. In 2006, Chudnovsky and Seymour [Combinatorica, 2010] presented the first polynomial time algorithm for this problem, which has become a critical subroutine in many algorithms for detecting induced subgraphs, such as beetles, pyramids, thetas, and even and odd holes. In 2007, Derhy and Picouleau [Discrete Applied Mathematics, 2009] considered the natural generalization to $k$ mandatory vertices, proving that, when $k$ is part of the input, the problem is $\mathsf{NP}$-complete, and asked about the complexity of four-in-a-tree. Motivated by this question and the relevance of the original problem, we study the parameterized complexity of $k$-in-a-tree. We begin by showing that the problem is $\mathsf{W[1]}$-hard when jointly parameterized by the size of the solution and the minimum clique cover and, under the Exponential Time Hypothesis, does not admit an $n^{o(k)}$ time algorithm. Afterwards, we use Courcelle's Theorem to prove fixed-parameter tractability under cliquewidth, which prompts our investigation into which parameterizations admit single exponential algorithms; we show that such algorithms exist for the unrelated parameterizations treewidth, distance to cluster, and distance to co-cluster. In terms of kernelization, we present a linear kernel under feedback edge set, and show that no polynomial kernel exists under vertex cover or distance to clique unless $\mathsf{NP} \subseteq \mathsf{coNP}/\mathsf{poly}$. Along with other remarks and previous work, our tractability and kernelization results cover many of the most commonly employed parameters in the graph parameter hierarchy.
Ankur Moitra, Michael Saks (2013)
We give a polynomial time algorithm for the lossy population recovery problem. In this problem, the goal is to approximately learn an unknown distribution on binary strings of length $n$ from lossy samples: for some parameter $\mu$, each coordinate of the sample is preserved with probability $\mu$ and otherwise is replaced by a `?'. The running time and number of samples needed for our algorithm are polynomial in $n$ and $1/\varepsilon$ for each fixed $\mu>0$. This improves on the algorithm of Wigderson and Yehudayoff, which runs in quasi-polynomial time for any $\mu > 0$, and the polynomial time algorithm of Dvir et al., which was shown to work for $\mu \gtrapprox 0.30$ by Batman et al. In fact, our algorithm also works in the more general framework of Batman et al., in which there is no a priori bound on the size of the support of the distribution. The algorithm we analyze is implicit in previous work; our main contribution is to analyze the algorithm by showing (via linear programming duality and connections to complex analysis) that a certain matrix associated with the problem has a robust local inverse even though its condition number is exponentially small. A corollary of our result is the first polynomial time algorithm for learning DNFs in the restriction access model of Dvir et al.
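The lossy sampling model used in this problem is straightforward to simulate; the following short Python sketch (the function name and interface are illustrative, not taken from the paper) draws one lossy observation of a binary string with retention probability $\mu$:

    import random

    def lossy_sample(x: str, mu: float) -> str:
        """Return a lossy observation of the binary string x.

        Each coordinate is preserved independently with probability mu
        and is replaced by '?' otherwise, as in the model described above.
        """
        return "".join(bit if random.random() < mu else "?" for bit in x)

    # Example: one lossy sample of a length-8 string with mu = 0.3.
    print(lossy_sample("10110010", 0.3))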