
Polynomial Kernels for Paw-free Edge Modification Problems

Published by Yixin Cao
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Let $H$ be a fixed graph. Given a graph $G$ and an integer $k$, the $H$-free edge modification problem asks whether it is possible to modify at most $k$ edges in $G$ to make it $H$-free. Sandeep and Sivadasan (IPEC 2015) asked whether the paw-free completion problem and the paw-free edge deletion problem admit polynomial kernels. We answer both questions affirmatively by presenting, respectively, $O(k)$-vertex and $O(k^4)$-vertex kernels for them. This is part of an ongoing program that aims at understanding the compressibility of $H$-free edge modification problems.
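For concreteness, the paw is the four-vertex graph consisting of a triangle with a pendant edge; it is the unique four-vertex graph with degree sequence $(1,2,2,3)$. Below is a minimal brute-force sketch of the target property (the dict-of-neighbour-sets representation and the function names are illustrative conventions of ours, not from the paper; the paper's kernelization is far more involved):

```python
from itertools import combinations

def induces_paw(adj, quad):
    """Check whether the 4 vertices in `quad` induce a paw, using the fact
    that the paw is the only 4-vertex graph with degree sequence 1,2,2,3."""
    degs = sorted(sum(1 for v in quad if v != u and v in adj[u]) for u in quad)
    return degs == [1, 2, 2, 3]

def is_paw_free(adj):
    """Brute-force O(n^4) check over all 4-vertex subsets."""
    return not any(induces_paw(adj, q) for q in combinations(adj, 4))

# A triangle {0,1,2} with pendant vertex 3 attached to 2 is itself a paw.
paw = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(is_paw_free(paw))  # False
```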


Read also

Yixin Cao, Yuping Ke (2021)
In an edge modification problem, we are asked to modify at most $k$ edges of a given graph to make the graph satisfy a certain property. Depending on the operations allowed, we have the completion problems and the edge deletion problems. A great amount of effort has been devoted to understanding the kernelization complexity of these problems. We revisit several well-studied edge modification problems and develop improved kernels for them:
- a $2k$-vertex kernel for the cluster edge deletion problem,
- a $3k^2$-vertex kernel for the trivially perfect completion problem,
- a $5k^{1.5}$-vertex kernel for the split completion problem and the split edge deletion problem, and
- a $5k^{1.5}$-vertex kernel for the pseudo-split completion problem and the pseudo-split edge deletion problem.
Moreover, our kernels for split completion and pseudo-split completion have only $O(k^{2.5})$ edges. Our results also include a $2k$-vertex kernel for the strong triadic closure problem, which is related to cluster edge deletion.
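Cluster graphs, the targets of cluster edge deletion, are exactly the graphs with no induced path on three vertices ($P_3$); these are the obstructions any kernelization rule must ultimately destroy. A minimal sketch that finds one such obstruction, reusing the dict-of-neighbour-sets convention from above (our own, not the paper's):

```python
def find_induced_p3(adj):
    """Return an induced path u-v-w (u, w non-adjacent), or None if the
    graph is already a cluster graph (a disjoint union of cliques)."""
    for u in adj:
        for v in adj[u]:
            for w in adj[v]:
                if w != u and w not in adj[u]:
                    return (u, v, w)
    return None

g = {0: {1}, 1: {0, 2}, 2: {1}}  # a path on three vertices
print(find_induced_p3(g))  # (0, 1, 2)
```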
Gabriel Bathie (2021)
In a (parameterized) graph edge modification problem, we are given a graph $G$, an integer $k$ and a (usually well-structured) class of graphs $\mathcal{G}$, and ask whether it is possible to transform $G$ into a graph $G' \in \mathcal{G}$ by adding and/or removing at most $k$ edges. Parameterized graph edge modification problems have received considerable attention in the last decades. In this paper, we focus on finding small kernels for edge modification problems. One of the most studied problems is the Cluster Editing problem, in which the goal is to partition the vertex set into a disjoint union of cliques. Even though this problem admits a $2k$ kernel [Cao, 2012], this kernel does not reduce the size of most instances. Therefore, we explore the question of whether linear kernels are a theoretical limit in edge modification problems, in particular when the target graphs are very structured (such as a partition into cliques, for instance). We prove, as far as we know, the first sublinear kernel for an edge modification problem. Namely, we show that Clique + Independent Set Deletion, which is a restriction of Cluster Deletion, admits a kernel of size $O(k/\log k)$. We also obtain small kernels for several other edge modification problems. We prove that Split Addition (and the equivalent Split Deletion) admits a linear kernel, improving the existing quadratic kernel of Ghosh et al. [Ghosh et al., 2015]. We complement this result by proving that Trivially Perfect Addition admits a quadratic kernel (improving the cubic kernel of Guo [Guo, 2007]), and finally prove that its triangle-free version (Starforest Deletion) admits a linear kernel, which is optimal under ETH.
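As a small aside on the split graphs targeted by Split Addition/Deletion (vertex set partitions into a clique and an independent set): split graphs can be recognized from the degree sequence alone, via the classical Hammer-Simeone characterization. A minimal sketch of that test (the function name is illustrative, not from the paper):

```python
def is_split(degrees):
    """Hammer-Simeone test: with degrees d1 >= ... >= dn and
    m = max{ i : d_i >= i-1 }, the graph is split iff
    sum_{i<=m} d_i == m(m-1) + sum_{i>m} d_i."""
    d = sorted(degrees, reverse=True)
    m = max(i for i in range(1, len(d) + 1) if d[i - 1] >= i - 1)
    return sum(d[:m]) == m * (m - 1) + sum(d[m:])

# K3 plus an isolated vertex is split (clique {a,b,c}, independent set {d}).
print(is_split([2, 2, 2, 0]))  # True
# C4 is a smallest non-split graph.
print(is_split([2, 2, 2, 2]))  # False
```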
Kernel methods are fundamental in machine learning, and faster algorithms for kernel approximation provide direct speedups for many core tasks in machine learning. The polynomial kernel is especially important, as other kernels can often be approximated by the polynomial kernel via a Taylor series expansion. Recent techniques in oblivious sketching reduce the dependence of the running time on the degree $q$ of the polynomial kernel from exponential to polynomial, which is useful for the Gaussian kernel, for which $q$ can be chosen to be polylogarithmic. However, for more slowly growing kernels, such as the neural tangent and arc-cosine kernels, $q$ needs to be polynomial, and previous work incurs a polynomial-factor slowdown in the running time. We give a new oblivious sketch which greatly improves upon this running time by removing the dependence on $q$ in the leading-order term. Combined with a novel sampling scheme, we give the fastest algorithms for approximating a large family of slow-growing kernels.
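A minimal illustration of the Taylor-expansion connection mentioned above: truncating $\exp(\langle x,y\rangle) = \sum_{q \ge 0} \langle x,y\rangle^q / q!$ after degree $Q$ expresses the exponential kernel (and, after rescaling, the Gaussian kernel on bounded data) as a weighted sum of polynomial kernels. The function name and the choice of $Q$ below are illustrative, not from the paper:

```python
import math
import numpy as np

def taylor_exp_kernel(x, y, Q):
    """Degree-Q Taylor truncation of exp(<x, y>): a weighted sum of
    polynomial kernels <x, y>**q / q!."""
    s = float(np.dot(x, y))
    return sum(s ** q / math.factorial(q) for q in range(Q + 1))

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)
x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)  # keep <x,y> in [-1,1]
exact = math.exp(np.dot(x, y))
approx = taylor_exp_kernel(x, y, Q=8)
print(exact, approx)  # truncation error decays like 1/Q! on bounded data
```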
We study the existence of polynomial kernels, for parameterized problems without a polynomial kernel on general graphs, when restricted to graphs of bounded twin-width. Our main result is that a polynomial kernel for $k$-Dominating Set on graphs of twin-width at most 4 would contradict a standard complexity-theoretic assumption. The reduction is quite involved, especially to get the twin-width upper bound down to 4, and can be tweaked to work for Connected $k$-Dominating Set and Total $k$-Dominating Set (albeit with a worse upper bound on the twin-width). The $k$-Independent Set problem admits the same lower bound by a much simpler argument, previously observed [ICALP 21], which extends to $k$-Independent Dominating Set, $k$-Path, $k$-Induced Path, $k$-Induced Matching, etc. On the positive side, we obtain a simple quadratic vertex kernel for Connected $k$-Vertex Cover and Capacitated $k$-Vertex Cover on graphs of bounded twin-width. Interestingly, the kernel applies to graphs of Vapnik-Chervonenkis density 1, and does not require a witness sequence. We also present a more intricate $O(k^{1.5})$ vertex kernel for Connected $k$-Vertex Cover. Finally, we show that deciding if a graph has twin-width at most 1 can be done in polynomial time, and observe that most optimization/decision graph problems can be solved in polynomial time on graphs of twin-width at most 1.
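Twin-width itself is defined through contraction sequences that repeatedly merge near-twin vertices. As a small illustration of the underlying notion (reusing the dict-of-neighbour-sets convention from the sketches above; this is the degenerate case, not anything from the paper), here is a search for exact twins, whose contraction changes no other adjacency:

```python
from itertools import combinations

def find_twins(adj):
    """Return a pair of vertices whose neighbourhoods agree outside the
    pair itself, or None if the graph has no twins."""
    for u, v in combinations(adj, 2):
        if adj[u] - {v} == adj[v] - {u}:
            return (u, v)
    return None

g = {0: {2, 3}, 1: {2, 3}, 2: {0, 1}, 3: {0, 1}}  # C4: 0 and 1 are twins
print(find_twins(g))  # (0, 1)
```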
Kernel methods are fundamental tools in machine learning that allow detection of non-linear dependencies between data without explicitly constructing feature vectors in high dimensional spaces. A major disadvantage of kernel methods is their poor scalability: primitives such as kernel PCA or kernel ridge regression generally take prohibitively large quadratic space and (at least) quadratic time, as kernel matrices are usually dense. Some methods for speeding up kernel linear algebra are known, but they all invariably take time exponential in either the dimension of the input point set (e.g., fast multipole methods suffer from the curse of dimensionality) or in the degree of the kernel function. Oblivious sketching has emerged as a powerful approach to speeding up numerical linear algebra over the past decade, but our understanding of oblivious sketching solutions for kernel matrices has remained quite limited, suffering from the aforementioned exponential dependence on input parameters. Our main contribution is a general method for applying sketching solutions developed in numerical linear algebra over the past decade to a tensoring of data points without forming the tensoring explicitly. This leads to the first oblivious sketch for the polynomial kernel with a target dimension that is only polynomially dependent on the degree of the kernel function, as well as the first oblivious sketch for the Gaussian kernel on bounded datasets that does not suffer from an exponential dependence on the dimensionality of input data points.
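The classical instance of "sketching a tensoring without forming it explicitly" is TensorSketch (Pham and Pagh, 2013), which CountSketches each factor of $x \otimes x$ and combines the results by FFT-based circular convolution. The sketch below is a hedged illustration of that earlier technique for degree 2, not of this paper's improved sketch; the sketch size $m$ is an illustrative choice:

```python
import numpy as np

def tensorsketch_deg2(X, m, seed=0):
    """Sketch each row x of X so that <sk(x), sk(y)> ~ <x, y>**2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    out = np.ones((n, m), dtype=complex)
    for _ in range(2):  # one independent CountSketch per tensor factor
        h = rng.integers(0, m, size=d)        # bucket hash
        s = rng.choice([-1.0, 1.0], size=d)   # sign hash
        cs = np.zeros((n, m))
        np.add.at(cs.T, h, (X * s).T)         # CountSketch: cs[:, h[j]] += s[j]*X[:, j]
        out *= np.fft.fft(cs, axis=1)         # convolution = product in Fourier domain
    return np.real(np.fft.ifft(out, axis=1))

rng = np.random.default_rng(1)
X = rng.normal(size=(2, 50))
S = tensorsketch_deg2(X, m=4096)
print(np.dot(X[0], X[1]) ** 2, np.dot(S[0], S[1]))  # close in expectation
```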