
Structural Parameters, Tight Bounds, and Approximation for (k,r)-Center

Publication date: 2017
Language: English





In $(k,r)$-Center we are given a (possibly edge-weighted) graph and are asked to select at most $k$ vertices (centers), so that all other vertices are at distance at most $r$ from a center. In this paper we provide a number of tight fine-grained bounds on the complexity of this problem with respect to various standard graph parameters. Specifically:

- For any $r \ge 1$, we show an algorithm that solves the problem in $O^*((3r+1)^{\textrm{cw}})$ time, where $\textrm{cw}$ is the clique-width of the input graph, as well as a tight SETH lower bound matching this algorithm's performance. As a corollary, for $r=1$, this closes the gap that previously existed on the complexity of Dominating Set parameterized by $\textrm{cw}$.
- We strengthen previously known FPT lower bounds by showing that $(k,r)$-Center is W[1]-hard parameterized by the input graph's vertex cover (if edge weights are allowed), or feedback vertex set, even if $k$ is an additional parameter. Our reductions imply tight ETH-based lower bounds. Finally, we devise an algorithm parameterized by vertex cover for unweighted graphs.
- We show that the complexity of the problem parameterized by tree-depth is $2^{\Theta(\textrm{td}^2)}$ by giving an algorithm of this complexity and a tight ETH-based lower bound.

We complement these mostly negative results by providing FPT approximation schemes parameterized by clique-width or treewidth which work efficiently independently of the values of $k,r$. In particular, we give algorithms which, for any $\epsilon > 0$, run in time $O^*((\textrm{tw}/\epsilon)^{O(\textrm{tw})})$ or $O^*((\textrm{cw}/\epsilon)^{O(\textrm{cw})})$ and return a $(k,(1+\epsilon)r)$-center if a $(k,r)$-center exists, thus circumventing the problem's W-hardness.
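
To make the problem statement concrete, here is a minimal Python sketch (not from the paper; the function name `is_kr_center` and the adjacency-list representation are my own choices) that checks whether a given set of at most $k$ centers is a $(k,r)$-center of an unweighted graph via multi-source BFS. For $r=1$ this is exactly the Dominating Set check mentioned in the abstract.

```python
from collections import deque

def is_kr_center(adj, centers, k, r):
    """Check whether `centers` is a (k, r)-center of the unweighted graph `adj`.

    adj: dict mapping each vertex to a list of its neighbours.
    Returns True iff |centers| <= k and every vertex is within distance r
    of some center (multi-source BFS started from all centers at once).
    """
    centers = set(centers)
    if len(centers) > k:
        return False
    dist = {c: 0 for c in centers}
    queue = deque(centers)
    while queue:
        u = queue.popleft()
        if dist[u] == r:          # no need to explore beyond radius r
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return all(v in dist for v in adj)

# Example: on a path with 5 vertices, the middle vertex is a (1, 2)-center.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
assert is_kr_center(path, {2}, k=1, r=2)
assert not is_kr_center(path, {0}, k=1, r=2)
```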




Read More

We study a combinatorial problem called Minimum Maximal Matching, where we are asked to find in a general graph a smallest matching that cannot be extended. We show that this problem is hard to approximate within any constant smaller than $2$, assuming the Unique Games Conjecture. As a corollary we show that Minimum Maximal Matching in bipartite graphs is hard to approximate within any constant smaller than $\frac{4}{3}$, under the same assumption. With a stronger variant of the Unique Games Conjecture, namely the Small Set Expansion Hypothesis, we are able to improve the hardness result up to a factor of $\frac{3}{2}$.
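
As a point of reference for the definition (illustrative only, not related to the hardness results above), the following brute-force Python sketch enumerates matchings of a tiny graph and returns a minimum maximal matching, i.e., a smallest matching to which no further edge can be added.

```python
from itertools import combinations

def minimum_maximal_matching(edges):
    """Brute force: a smallest matching that cannot be extended (tiny graphs only)."""
    edges = list(edges)

    def is_matching(m):
        used = [v for e in m for v in e]
        return len(used) == len(set(used))   # no vertex matched twice

    def is_maximal(m):
        # maximal <=> no remaining edge can be added while staying a matching
        return all(not is_matching(m + [e]) for e in edges if e not in m)

    for size in range(len(edges) + 1):       # smallest size first
        for m in combinations(edges, size):
            m = list(m)
            if is_matching(m) and is_maximal(m):
                return m
    return []

# Example: path a-b-c-d. The single edge (b, c) is a minimum maximal matching.
print(minimum_maximal_matching([("a", "b"), ("b", "c"), ("c", "d")]))
```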
We develop a new framework for generalizing approximation algorithms from the structural graph algorithm literature so that they apply to graphs somewhat close to the class for which they were designed (a scenario we expect is common when working with real-world networks) while still guaranteeing approximation ratios. The idea is to $\textit{edit}$ a given graph via vertex- or edge-deletions to put the graph into an algorithmically tractable class, apply known approximation algorithms for that class, and then $\textit{lift}$ the solution to apply to the original graph. We give a general characterization of when an optimization problem is amenable to this approach, and show that it includes many well-studied graph problems, such as Independent Set, Vertex Cover, Feedback Vertex Set, Minimum Maximal Matching, Chromatic Number, ($\ell$-)Dominating Set, Edge ($\ell$-)Dominating Set, and Connected Dominating Set. To enable this framework, we develop new editing algorithms that find the approximately-fewest edits required to bring a given graph into one of several important graph classes (in some cases, also approximating the target parameter of the family). For bounded degeneracy, we obtain a bicriteria $(4,4)$-approximation which also extends to a smoother bicriteria trade-off. For bounded treewidth, we obtain a bicriteria $(O(\log^{1.5} n), O(\sqrt{\log w}))$-approximation, and for bounded pathwidth, we obtain a bicriteria $(O(\log^{1.5} n), O(\sqrt{\log w} \cdot \log n))$-approximation. For treedepth $2$ (also related to bounded expansion), we obtain a $4$-approximation. We also prove complementary hardness-of-approximation results assuming $\mathrm{P} \neq \mathrm{NP}$: in particular, these problems are all log-factor inapproximable, except the last which is not approximable below some constant factor ($2$ assuming UGC).
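
A minimal sketch of the edit/apply/lift pattern described above, shown for Vertex Cover with vertex deletions for concreteness. The helpers `edit_into_class` and `solve_in_class` are stand-in placeholders supplied by the caller, not the paper's editing algorithms or approximation guarantees.

```python
def edit_apply_lift(graph, edit_into_class, solve_in_class):
    """Generic edit/apply/lift pipeline, illustrated for Vertex Cover.

    edit_into_class(graph) -> set of vertices D whose removal puts the graph
        into some tractable class (placeholder for an editing algorithm).
    solve_in_class(graph)  -> an (approximate) vertex cover of the edited graph.

    Lifting is simple for Vertex Cover: the deleted vertices D cover every edge
    they touch, so a cover of G - D together with D is a cover of G.
    """
    deleted = edit_into_class(graph)                       # 1. edit
    edited = {v: [u for u in nbrs if u not in deleted]
              for v, nbrs in graph.items() if v not in deleted}
    partial_cover = solve_in_class(edited)                 # 2. apply
    return partial_cover | deleted                         # 3. lift

# Toy usage: delete one high-degree vertex, then cover the rest crudely.
g = {1: [2, 3, 4], 2: [1, 3], 3: [1, 2], 4: [1]}
cover = edit_apply_lift(
    g,
    edit_into_class=lambda G: {max(G, key=lambda v: len(G[v]))},
    solve_in_class=lambda G: {v for v, nbrs in G.items() if nbrs},
)
print(cover)
```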
Chang's lemma (Duke Mathematical Journal, 2002) is a classical result with applications across several areas in mathematics and computer science. For a Boolean function $f$ that takes values in $\{-1,1\}$, let $r(f)$ denote its Fourier rank. For each positive threshold $t$, Chang's lemma provides a lower bound on $wt(f) := \Pr[f(x)=-1]$ in terms of the dimension of the span of its characters with Fourier coefficients of magnitude at least $1/t$. We examine the tightness of Chang's lemma with respect to the following three natural settings of the threshold:

- the Fourier sparsity of $f$, denoted $k(f)$,
- the Fourier max-supp-entropy of $f$, denoted $k'(f)$, defined to be $\max\{1/|\hat{f}(S)| : \hat{f}(S) \neq 0\}$,
- the Fourier max-rank-entropy of $f$, denoted $k''(f)$, defined to be the minimum $t$ such that characters whose Fourier coefficients are at least $1/t$ in absolute value span a space of dimension $r(f)$.

We prove new lower bounds on $wt(f)$ in terms of these measures. One of our lower bounds subsumes and refines the previously best known upper bound on $r(f)$ in terms of $k(f)$ by Sanyal (ToC, 2019). Another lower bound is based on our improvement of a bound by Chattopadhyay, Hatami, Lovett and Tal (ITCS, 2019) on the sum of the absolute values of the level-$1$ Fourier coefficients. We also show that Chang's lemma for these choices of the threshold is asymptotically outperformed by our bounds for most settings of the parameters involved. Next, we show that our bounds are tight for a wide range of the parameters involved, by constructing functions (which are modifications of the Addressing function) witnessing their tightness. Finally, we construct Boolean functions $f$ for which

- our lower bounds asymptotically match $wt(f)$, and
- for any choice of the threshold $t$, the lower bound obtained from Chang's lemma is asymptotically smaller than $wt(f)$.
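
To pin down the quantities involved, here is a brute-force Python sketch (illustrative only, not from the paper; `fourier_measures` and its numeric tolerances are my own choices) that computes $wt(f)$, the Fourier sparsity, the Fourier rank, and the two entropy-type thresholds of an explicitly given $f : \{0,1\}^n \to \{-1,1\}$ for small $n$.

```python
from itertools import product

def fourier_measures(f, n):
    """Brute-force Fourier measures of f: {0,1}^n -> {-1,1} (small n only)."""
    points = list(product((0, 1), repeat=n))
    # Fourier coefficient of S (a 0/1 tuple): 2^-n * sum_x f(x) * (-1)^{<S,x>}
    coeffs = {S: sum(f(x) * (-1) ** sum(s * xi for s, xi in zip(S, x))
                     for x in points) / 2 ** n
              for S in points}
    support = [S for S, c in coeffs.items() if abs(c) > 1e-9]

    def gf2_rank(vectors):
        """Rank over GF(2) of the characters, viewed as 0/1 vectors."""
        basis, rank = [], 0
        for vec in vectors:
            v = list(vec)
            for b in basis:
                pivot = next(i for i, x in enumerate(b) if x)
                if v[pivot]:
                    v = [a ^ c for a, c in zip(v, b)]
            if any(v):
                basis.append(v)
                rank += 1
        return rank

    wt = sum(1 for x in points if f(x) == -1) / 2 ** n       # Pr[f(x) = -1]
    k = len(support)                                          # Fourier sparsity
    r = gf2_rank(support)                                     # Fourier rank
    max_supp = max(1 / abs(coeffs[S]) for S in support)       # max-supp-entropy
    # max-rank-entropy: least t whose heavy characters already span dimension r
    thresholds = sorted(1 / abs(coeffs[S]) for S in support)
    max_rank = next(t for t in thresholds
                    if gf2_rank([S for S in support
                                 if abs(coeffs[S]) >= 1 / t - 1e-9]) == r)
    return wt, k, r, max_supp, max_rank

# Example: AND of two bits mapped to {-1, 1} (output -1 iff both bits are 1).
print(fourier_measures(lambda x: -1 if x == (1, 1) else 1, n=2))
```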
Joel Friedman, 2017
We develop a notion of {\em inner rank} as a tool for obtaining lower bounds on the rank of matrix multiplication tensors. We use it to give a short proof that the border rank (and therefore rank) of the tensor associated with $n \times n$ matrix multiplication over an arbitrary field is at least $2n^2 - n + 1$. While inner rank does not provide improvements to currently known lower bounds, we argue that this notion merits further study.
In the communication problem $\mathbf{UR}$ (universal relation) [KRW95], Alice and Bob respectively receive $x$ and $y$ in $\{0,1\}^n$ with the promise that $x \neq y$. The last player to receive a message must output an index $i$ such that $x_i \neq y_i$. We prove that the randomized one-way communication complexity of this problem in the public coin model is exactly $\Theta(\min\{n, \log(1/\delta)\log^2(\frac{n}{\log(1/\delta)})\})$ bits for failure probability $\delta$. Our lower bound holds even if it is promised that $\mathop{support}(y) \subset \mathop{support}(x)$. As a corollary, we obtain optimal lower bounds for $\ell_p$-sampling in strict turnstile streams for $0 \le p < 2$, as well as for the problem of finding duplicates in a stream. Our lower bounds do not need to use large weights, and hold even if it is promised that $x \in \{0,1\}^n$ at all points in the stream. Our lower bound demonstrates that any algorithm $\mathcal{A}$ solving sampling problems in turnstile streams in low memory can be used to encode subsets of $[n]$ of certain sizes into a number of bits below the information-theoretic minimum. Our encoder makes adaptive queries to $\mathcal{A}$ throughout its execution, but does so carefully so as not to violate correctness. This is accomplished by injecting random noise into the encoder's interactions with $\mathcal{A}$, which is loosely motivated by techniques in differential privacy. Our correctness analysis involves understanding the ability of $\mathcal{A}$ to correctly answer adaptive queries which have positive but bounded mutual information with $\mathcal{A}$'s internal randomness, and may be of independent interest in the newly emerging area of adaptive data analysis with a theoretical computer science lens.
