Lossy Kernelization


Abstract

In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size $\alpha$-approximate kernel. Loosely speaking, a polynomial size $\alpha$-approximate kernel is a polynomial time pre-processing algorithm that takes as input an instance $(I,k)$ of a parameterized problem and outputs another instance $(I',k')$ of the same problem, such that $|I'| + k' \leq k^{O(1)}$. Additionally, for every $c \geq 1$, a $c$-approximate solution $s'$ to the pre-processed instance $(I',k')$ can be turned in polynomial time into a $(c \cdot \alpha)$-approximate solution $s$ to the original instance $(I,k)$. Our main technical contributions are $\alpha$-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless $NP \subseteq coNP/poly$. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size and the hardness of approximation lower bounds for all three problems. On the negative side, we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an $\alpha$-approximate kernel of polynomial size, for any $\alpha \geq 1$, unless $NP \subseteq coNP/poly$. In order to prove this lower bound we need to combine, in a non-trivial way, the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation.
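To make the two-part guarantee above concrete, the definition can be restated schematically in display form. This is a paraphrase of the description given in this abstract, not the paper's exact formal definition; the names $\textsc{Reduce}$ and $\textsc{Lift}$ are illustrative labels for the two polynomial time algorithms involved.

% Schematic form of a polynomial size \alpha-approximate kernel.
% Reduce/Lift are hypothetical names for the pre-processing and
% solution-lifting algorithms described in the abstract; both run
% in polynomial time.
\[
  \textsc{Reduce}(I,k) = (I',k')
  \quad\text{with}\quad |I'| + k' \leq k^{O(1)},
\]
\[
  \textsc{Lift}\bigl((I,k),(I',k'),s'\bigr) = s,
  \quad\text{where } s' \text{ is $c$-approximate for } (I',k')
  \;\Longrightarrow\; s \text{ is $(c \cdot \alpha)$-approximate for } (I,k).
\]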
