A split graph is a graph whose vertex set can be partitioned into a clique and a stable set. Given a graph $G$ and a weight function $w: V(G) \to \mathbb{Q}_{\geq 0}$, the Split Vertex Deletion (SVD) problem asks to find a minimum weight set of vertices $X$ such that $G-X$ is a split graph. It is easy to show that a graph is a split graph if and only if it does not contain a $4$-cycle, a $5$-cycle, or a two-edge matching as an induced subgraph. Therefore, SVD admits an easy $5$-approximation algorithm. On the other hand, for every $\delta > 0$, SVD does not admit a $(2-\delta)$-approximation algorithm, unless P=NP or the Unique Games Conjecture fails. For every $\epsilon > 0$, Lokshtanov, Misra, Panolan, Philip, and Saurabh recently gave a randomized $(2+\epsilon)$-approximation algorithm for SVD. In this work we give an extremely simple deterministic $(2+\epsilon)$-approximation algorithm for SVD.
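To make the forbidden-subgraph argument concrete, here is a minimal Python sketch of the folklore $5$-approximation for the unweighted case (background only, not the algorithm of the paper; the dict-of-sets graph representation and function names are our own). Every obstruction, an induced $C_4$, $C_5$ or $2K_2$, has at most $5$ vertices and any feasible solution must delete at least one of them, so repeatedly deleting all vertices of some obstruction stays within a factor $5$ of optimal.

    from itertools import combinations

    def find_obstruction(adj):
        """Return the vertex set of an induced C4, 2K2 or C5, or None if the graph is split."""
        verts = list(adj)
        for quad in combinations(verts, 4):
            # degrees inside the induced subgraph on these 4 vertices
            degs = sorted(sum(1 for v in quad if v in adj[u]) for u in quad)
            if degs == [2, 2, 2, 2]:          # induced 4-cycle
                return set(quad)
            if degs == [1, 1, 1, 1]:          # induced two-edge matching (2K2)
                return set(quad)
        for quint in combinations(verts, 5):
            degs = sorted(sum(1 for v in quint if v in adj[u]) for u in quint)
            if degs == [2, 2, 2, 2, 2]:       # induced 5-cycle
                return set(quint)
        return None

    def svd_5_approx(adj):
        """adj: dict mapping each vertex to its set of neighbours.  Returns a deletion set X."""
        adj = {u: set(nbrs) for u, nbrs in adj.items()}
        deleted = set()
        while True:
            obstruction = find_obstruction(adj)
            if obstruction is None:
                return deleted                # the remaining graph is split
            deleted |= obstruction
            for u in obstruction:
                del adj[u]
            for u in adj:
                adj[u] -= obstruction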
We give the first $2$-approximation algorithm for the cluster vertex deletion problem. This is tight, since approximating the problem within any constant factor smaller than $2$ is UGC-hard. Our algorithm combines the previous approaches, based on the local ratio technique and the management of true twins, with a novel construction of a good cost function on the vertices at distance at most $2$ from any vertex of the input graph. As an additional contribution, we also study cluster vertex deletion from the polyhedral perspective, where we prove almost matching upper and lower bounds on how well linear programming relaxations can approximate the problem.
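As background for the local ratio technique mentioned above, the following Python sketch shows the classical factor-$3$ local ratio algorithm for weighted cluster vertex deletion (not the $2$-approximation of the paper; identifiers are illustrative). The obstructions to being a cluster graph are induced paths on three vertices; each local ratio step pays the same amount $\varepsilon$ on all three vertices of an obstruction, while any feasible solution must pay at least $\varepsilon$ on it, which yields the factor $3$.

    def find_induced_p3(adj):
        """Return (a, b, c) with ab, bc edges and ac a non-edge, or None if adj is a cluster graph."""
        for b in adj:
            for a in adj[b]:
                for c in adj[b]:
                    if a != c and c not in adj[a]:
                        return (a, b, c)
        return None

    def cvd_local_ratio(adj, weight):
        """adj: dict vertex -> set of neighbours; weight: dict vertex -> nonnegative weight."""
        w = dict(weight)
        zeroed = set()                        # vertices whose residual weight has dropped to 0
        while True:
            # restrict attention to the graph with the zero-weight vertices removed
            live = {u: adj[u] - zeroed for u in adj if u not in zeroed}
            p3 = find_induced_p3(live)
            if p3 is None:
                return zeroed                 # deleting these vertices leaves a cluster graph
            eps = min(w[v] for v in p3)       # local ratio step on the obstruction
            for v in p3:
                w[v] -= eps
                if w[v] == 0:
                    zeroed.add(v)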
Let $G$ be an $n$-node graph without two disjoint odd cycles. The algorithm of Artmann, Weismantel and Zenklusen (STOC '17) for bimodular integer programs can be used to find a maximum weight stable set in $G$ in strongly polynomial time. Building on structural results characterizing sufficiently connected graphs without two disjoint odd cycles, we construct a size-$O(n^2)$ extended formulation for the stable set polytope of $G$.
We prove that for every $n$-vertex graph $G$, the extension complexity of the correlation polytope of $G$ is $2^{O(\mathrm{tw}(G) + \log n)}$, where $\mathrm{tw}(G)$ is the treewidth of $G$. Our main result is that this bound is tight for graphs contained in minor-closed classes.
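For orientation, and under what we believe is the standard definition (the paper may phrase it differently), the correlation polytope of a graph $G$ has one coordinate per vertex and one per edge:

$$\mathrm{COR}(G) = \mathrm{conv}\{ (x,y) \in \{0,1\}^{V(G)} \times \{0,1\}^{E(G)} : y_{uv} = x_u x_v \text{ for all } uv \in E(G) \},$$

so for $G = K_n$ it specialises to the usual correlation (boolean quadric) polytope.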
In convex integer programming, various procedures have been developed to strengthen convex relaxations of sets of integer points. On the one hand, there exist several general-purpose methods that strengthen relaxations without specific knowledge of the set $S$, such as popular linear programming or semi-definite programming hierarchies. On the other hand, various methods have been designed for obtaining strengthened relaxations for very specific sets that arise in combinatorial optimization. We propose a new efficient method that interpolates between these two approaches. Our procedure strengthens any convex set $Q \subseteq \mathbb{R}^n$ containing a set $S \subseteq \{0,1\}^n$ by exploiting certain additional information about $S$. Namely, the required extra information will be in the form of a Boolean formula $\phi$ defining the target set $S$. The aim of this work is to analyze various aspects regarding the strength of our procedure. As one result, interpreting an iterated application of our procedure as a hierarchy, our findings simplify, improve, and extend previous results by Bienstock and Zuckerberg on covering problems.
Let $W_t$ denote the wheel on $t+1$ vertices. We prove that for every integer $t \geq 3$ there is a constant $c = c(t)$ such that for every integer $k \geq 1$ and every graph $G$, either $G$ has $k$ vertex-disjoint subgraphs each containing $W_t$ as a minor, or there is a subset $X$ of at most $c k \log k$ vertices such that $G-X$ has no $W_t$ minor. This is best possible, up to the value of $c$. We conjecture that the result remains true more generally if we replace $W_t$ with any fixed planar graph $H$.
The extension complexity $\mathsf{xc}(P)$ of a polytope $P$ is the minimum number of facets of a polytope that affinely projects to $P$. Let $G$ be a bipartite graph with $n$ vertices, $m$ edges, and no isolated vertices. Let $\mathsf{STAB}(G)$ be the convex hull of the stable sets of $G$. It is easy to see that $n \leqslant \mathsf{xc}(\mathsf{STAB}(G)) \leqslant n+m$. We improve both of these bounds. For the upper bound, we show that $\mathsf{xc}(\mathsf{STAB}(G))$ is $O(\frac{n^2}{\log n})$, which is an improvement when $G$ has quadratically many edges. For the lower bound, we prove that $\mathsf{xc}(\mathsf{STAB}(G))$ is $\Omega(n \log n)$ when $G$ is the incidence graph of a finite projective plane. We also provide examples of $3$-regular bipartite graphs $G$ such that the edge vs stable set matrix of $G$ has a fooling set of size $|E(G)|$.
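As a quick sanity check on the trivial upper bound (a well-known fact, recalled here for convenience): for bipartite $G$ the stable set polytope needs no extension at all, since it is already described in the original space by the nonnegativity and edge inequalities,

$$\mathsf{STAB}(G) = \{ x \in \mathbb{R}^{V(G)} : x_v \geqslant 0 \text{ for all } v \in V(G), \; x_u + x_v \leqslant 1 \text{ for all } uv \in E(G) \},$$

which are at most $n+m$ inequalities (the bounds $x_v \leqslant 1$ are implied because $G$ has no isolated vertices); hence $\mathsf{xc}(\mathsf{STAB}(G)) \leqslant n+m$.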
Let $S \subseteq \{0,1\}^n$ and let $R$ be any polytope contained in $[0,1]^n$ with $R \cap \{0,1\}^n = S$. We prove that $R$ has bounded Chvátal-Gomory rank (CG-rank) provided that $S$ has bounded notch and bounded gap, where the notch is the minimum integer $p$ such that all $p$-dimensional faces of the $0/1$-cube have a nonempty intersection with $S$, and the gap is a measure of the size of the facet coefficients of $\mathsf{conv}(S)$. Let $H[\bar{S}]$ denote the subgraph of the $n$-cube induced by the vertices not in $S$. We prove that if $H[\bar{S}]$ does not contain a subdivision of a large complete graph, then both the notch and the gap are bounded. By our main result, this implies that the CG-rank of $R$ is bounded as a function of the treewidth of $H[\bar{S}]$. We also prove that if $S$ has notch $3$, then the CG-rank of $R$ is always bounded. Both results generalize a recent theorem of Cornuéjols and Lee, who proved that the CG-rank is bounded by a constant if the treewidth of $H[\bar{S}]$ is at most $2$.
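To illustrate the notch with a small example of our own: for $S = \{0,1\}^n \setminus \{\mathbf{0}\}$, the $0$-dimensional face $\{\mathbf{0}\}$ of the cube misses $S$, while every edge of the cube contains a vertex of $S$; hence the notch of this $S$ equals $1$.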
Generalized probabilistic theories (GPT) provide a general framework that includes classical and quantum theories. Such a theory is described by a cone $C$ and its dual $C^*$. We show that whether some one-way communication complexity problems can be solved within a GPT is equivalent to the recently introduced cone factorisation of the corresponding communication matrix $M$. We also prove an analogue of Holevo's theorem: when the cone $C$ is contained in $\mathbb{R}^{n}$, the classical capacity of the channel realised by sending GPT states and measuring them is bounded by $\log n$.

Polytopes and optimising functions over polytopes arise in many areas of discrete mathematics. A conic extension of a polytope is the intersection of a cone $C$ with an affine subspace whose projection onto the original space yields the desired polytope. Extensions of polytopes can sometimes be much simpler geometric objects than the polytope itself. The existence of a conic extension of a polytope is equivalent to that of a cone factorisation of the slack matrix of the polytope, on the same cone. We show that all $0/1$ polytopes whose vertices can be recognized by a polynomial size circuit, which includes as a special case the travelling salesman polytope and many other polytopes from combinatorial optimisation, have small conic extension complexity when the cone is the completely positive cone. Using recent exponential lower bounds on the linear extension complexity of polytopes, this provides an exponential gap between the communication complexity of GPT based on the completely positive cone and classical communication complexity, and a conjectured exponential gap with quantum communication complexity. Our work thus relates the communication complexity of generalisations of quantum theory to questions of mainstream interest in the area of combinatorial optimisation.
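Spelling out the equivalence invoked above in the usual Yannakakis-style form (our paraphrase of the standard statement, not a quotation from the paper): a polytope with slack matrix $M$ admits a conic extension over the cone $C$ if and only if $M$ has a factorisation

$$M_{ij} = \langle a_i, b_j \rangle \quad \text{with } a_i \in C^{*} \text{ and } b_j \in C,$$

with one vector $a_i$ per facet-defining inequality and one vector $b_j$ per vertex of the polytope.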
We consider the problem of partial order production: arrange the elements of an unknown totally ordered set T into a target partially ordered set S, by comparing a minimum number of pairs in T. Special cases include sorting by comparisons, selection, multiple selection, and heap construction. We give an algorithm performing ITLB + o(ITLB) + O(n) comparisons in the worst case. Here, n denotes the size of the ground sets, and ITLB denotes a natural information-theoretic lower bound on the number of comparisons needed to produce the target partial order. Our approach is to replace the target partial order by a weak order (that is, a partial order with a layered structure) extending it, without increasing the information-theoretic lower bound too much. We then solve the problem by applying an efficient multiple selection algorithm. The overall complexity of our algorithm is polynomial. This answers a question of Yao (SIAM J. Comput. 18, 1989). We base our analysis on the entropy of the target partial order, a quantity that can be efficiently computed and provides a good estimate of the information-theoretic lower bound.
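For concreteness (our reading of the standard counting argument, not a quotation from the paper): writing $e(S)$ for the number of linear extensions of the target partial order $S$ on $n$ elements, any fixed arrangement of the input elements is correct for at most $e(S)$ of the $n!$ possible input orders, so a comparison tree needs at least $n!/e(S)$ leaves; the natural lower bound is therefore

$$\mathrm{ITLB} = \log_2 \frac{n!}{e(S)}$$

comparisons in the worst case.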