
Deterministic Rounding of Dynamic Fractional Matchings

Added by Sayan Bhattacharya
Publication date: 2021
Language: English





We present a framework for deterministically rounding a dynamic fractional matching. Applying our framework in a black-box manner on top of existing fractional matching algorithms, we derive the following new results: (1) The first deterministic algorithm for maintaining a $(2-\delta)$-approximate maximum matching in a fully dynamic bipartite graph, in arbitrarily small polynomial update time. (2) The first deterministic algorithm for maintaining a $(1+\delta)$-approximate maximum matching in a decremental bipartite graph, in polylogarithmic update time. (3) The first deterministic algorithm for maintaining a $(2+\delta)$-approximate maximum matching in a fully dynamic general graph, in small polylogarithmic (specifically, $O(\log^4 n)$) update time. These results are respectively obtained by applying our framework on top of the fractional matching algorithms of Bhattacharya et al. [STOC'16], Bernstein et al. [FOCS'20], and Bhattacharya and Kulkarni [SODA'19]. Prior to our work, there were two known general-purpose rounding schemes for dynamic fractional matchings. Both of these schemes, by Arar et al. [ICALP'18] and Wajc [STOC'20], were randomized. Our rounding scheme works by maintaining a good \emph{matching sparsifier} with bounded arboricity and then applying the algorithm of Peleg and Solomon [SODA'16] to maintain a near-optimal matching in this low-arboricity graph. To the best of our knowledge, this is the first dynamic matching algorithm that works on general graphs by using an algorithm for low-arboricity graphs as a black-box subroutine. This feature of our rounding scheme might be of independent interest.
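As a rough illustration of the pipeline just described (fractional matching, then a bounded-arboricity sparsifier, then an integral matching computed on the sparsifier), here is a minimal static Python sketch. The thresholding rule and the greedy step are illustrative stand-ins only, not the paper's deterministic sparsifier or the Peleg-Solomon low-arboricity algorithm, and all function names are hypothetical.

```python
def sparsify(fractional_matching, eps=0.1):
    """Keep every edge whose fractional value is at least eps.

    The fractional values incident to any vertex sum to at most 1, so each
    vertex keeps at most 1/eps edges; the kept subgraph therefore has maximum
    degree (and hence arboricity) at most 1/eps.
    """
    return {e for e, x in fractional_matching.items() if x >= eps}


def greedy_matching(edges):
    """Stand-in for a dynamic matching algorithm on a low-arboricity graph."""
    matched, matching = set(), []
    for (u, v) in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching


# Example: a fractional matching of total value 2 on the 4-cycle a-b-c-d-a.
x = {('a', 'b'): 0.5, ('b', 'c'): 0.5, ('c', 'd'): 0.5, ('d', 'a'): 0.5}
H = sparsify(x, eps=0.4)
print(greedy_matching(sorted(H)))  # [('a', 'b'), ('c', 'd')]
```

In the actual framework both steps are maintained dynamically under edge insertions and deletions; the sketch is only meant to show why a fractional matching with no tiny values immediately yields a sparse, low-arboricity subgraph on which an integral matching can be found.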



Related Research

David Wajc (2019)
We present a new dynamic matching sparsification scheme. From this scheme we derive a framework for dynamically rounding fractional matchings against \emph{adaptive adversaries}. Plugging known dynamic fractional matching algorithms into our framework, we obtain numerous randomized dynamic matching algorithms that work against adaptive adversaries (the first such algorithms, as all previous randomized algorithms for this problem assumed an \emph{oblivious} adversary). In particular, for any constant $\epsilon>0$, our framework yields $(2+\epsilon)$-approximate algorithms with constant update time or polylog worst-case update time, as well as $(2-\delta)$-approximate algorithms in bipartite graphs with arbitrarily small polynomial update time, with all of these guarantees holding against adaptive adversaries. All of these results achieve \emph{polynomially} better update-time-to-approximation tradeoffs than were previously known to be achievable against adaptive adversaries.
This paper presents an algorithm for estimating the weight of a maximum weighted matching by augmenting any estimation routine for the size of an unweighted matching. The algorithm is implementable in any streaming model, including dynamic graph streams. We also give the first constant-factor estimation of the maximum matching size in a dynamic graph stream for planar graphs (or any graph with bounded arboricity) using $\tilde{O}(n^{4/5})$ space, which also extends to weighted matching. Using previous results by Kapralov, Khanna, and Sudan (2014), we obtain a $\mathrm{polylog}(n)$ approximation for general graphs using $\mathrm{polylog}(n)$ space in random-order streams. In addition, we give a space lower bound of $\Omega(n^{1-\varepsilon})$ for any randomized algorithm estimating the size of a maximum matching up to a $1+O(\varepsilon)$ factor for adversarial streams.
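One standard way to reduce weighted matching estimation to unweighted size estimation is geometric weight bucketing. The Python sketch below shows that reduction in its simplest form; it illustrates the general idea rather than the paper's exact augmentation routine, the names are hypothetical, edge weights are assumed to be at least 1, and constant factors are glossed over.

```python
import math


def greedy_size(edges):
    """Toy stand-in for an unweighted matching-size estimator."""
    matched, count = set(), 0
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))
            count += 1
    return count


def estimate_weighted_matching(edges, estimate_size=greedy_size, eps=0.5):
    """Combine unweighted size estimates over geometric weight levels.

    edges: list of (u, v, w) with weight w >= 1.
    estimate_size: any routine estimating the maximum matching size of an
    unweighted edge list (e.g. a streaming estimator).
    """
    if not edges:
        return 0.0
    w_max = max(w for _, _, w in edges)
    top_level = int(math.log(w_max, 1.0 + eps))
    total = 0.0
    for i in range(top_level + 1):
        threshold = (1.0 + eps) ** i
        level_edges = [(u, v) for u, v, w in edges if w >= threshold]
        total += eps * threshold * estimate_size(level_edges)
    return total


# Weighted matching of true weight 5; the estimate is within a constant factor.
print(estimate_weighted_matching([('a', 'b', 4.0), ('c', 'd', 1.0)]))
```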
We present a general approach to rounding semidefinite programming relaxations obtained by the Sum-of-Squares method (Lasserre hierarchy). Our approach is based on using the connection between these relaxations and the Sum-of-Squares proof system to transform a *combining algorithm* -- an algorithm that maps a distribution over solutions into a (possibly weaker) solution -- into a *rounding algorithm* that maps a solution of the relaxation to a solution of the original problem. Using this approach, we obtain algorithms that yield improved results for natural variants of three well-known problems: 1) We give a quasipolynomial-time algorithm that approximates the maximum of a low-degree multivariate polynomial with non-negative coefficients over the Euclidean unit sphere. Beyond being of interest in its own right, this is related to an open question in quantum information theory, and our techniques have already led to improved results in this area (Brandão and Harrow, STOC 13). 2) We give a polynomial-time algorithm that, given a $d$-dimensional subspace of $\mathbb{R}^n$ that (almost) contains the characteristic function of a set of size $n/k$, finds a vector $v$ in the subspace satisfying $\|v\|_4^4 > c(k/d^{1/3}) \|v\|_2^2$, where $\|v\|_p = (\mathbb{E}_i\, v_i^p)^{1/p}$. Aside from being a natural relaxation, this is also motivated by a connection to the Small Set Expansion problem shown by Barak et al. (STOC 2012), and our results yield a certain improvement for that problem. 3) We use this notion of $L_4$ vs. $L_2$ sparsity to obtain a polynomial-time algorithm with substantially improved guarantees for recovering a planted $\mu$-sparse vector $v$ in a random $d$-dimensional subspace of $\mathbb{R}^n$. If $v$ has $\mu n$ nonzero coordinates, we can recover it with high probability whenever $\mu < O(\min(1, n/d^2))$, improving, for $d < n^{2/3}$, on prior methods which intrinsically required $\mu < O(1/\sqrt{d})$.
We present a deterministic dynamic algorithm for maintaining a $(1+\epsilon)f$-approximate minimum-cost set cover with $O(f\log(Cn)/\epsilon^2)$ amortized update time, when the input set system is undergoing element insertions and deletions. Here, $n$ denotes the number of elements, each element appears in at most $f$ sets, and the cost of each set lies in the range $[1/C, 1]$. Our result, together with that of Gupta et al. [STOC'17], implies that there is a deterministic algorithm for this problem with $O(f\log(Cn))$ amortized update time and $O(\min(\log n, f))$-approximation ratio, which nearly matches the polynomial-time hardness of approximation for minimum set cover in the static setting. Our update time is only $O(\log(Cn))$ away from a trivial lower bound. Prior to our work, the previous best approximation ratio guaranteed by deterministic algorithms was $O(f^2)$, due to Bhattacharya et al. [ICALP'15]. In contrast, the only result that guaranteed an $O(f)$-approximation was obtained very recently by Abboud et al. [STOC'19], who designed a dynamic algorithm with $(1+\epsilon)f$-approximation ratio and $O(f^2\log n/\epsilon)$ amortized update time. Besides the extra $O(f)$ factor in the update time compared to our and Gupta et al.'s results, the Abboud et al. algorithm is randomized, and works only when the adversary is oblivious and the sets are unweighted (each set has the same cost). We achieve our result via the primal-dual approach, by maintaining a fractional packing solution as a dual certificate. Unlike previous primal-dual algorithms that try to satisfy some local constraints for individual sets at all times, our algorithm essentially waits until the dual solution changes significantly globally, and fixes the solution only where the fix is needed.
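For intuition about the dual certificate mentioned above, the following minimal static Python sketch (with hypothetical names) shows the classic primal-dual $f$-approximation for weighted set cover: raise an uncovered element's packing variable until some set containing it becomes tight, and take all tight sets into the cover. The dynamic algorithm of the paper maintains such a fractional packing under element insertions and deletions and repairs it lazily, which this sketch does not attempt to reproduce.

```python
def primal_dual_set_cover(elements, sets, cost, tol=1e-9):
    """Static primal-dual f-approximation for weighted set cover.

    elements: iterable of elements (each must belong to at least one set).
    sets: dict mapping a set name to the set of elements it contains.
    cost: dict mapping a set name to its cost.
    Returns the chosen cover and the dual packing y; the cover's cost is at
    most f times the value of y, hence at most f times the optimum.
    """
    y = {e: 0.0 for e in elements}   # dual packing variables
    slack = dict(cost)               # remaining slack of each set's constraint
    cover, covered = [], set()
    for e in elements:
        if e in covered:
            continue
        containing = [s for s in sets if e in sets[s]]
        delta = min(slack[s] for s in containing)
        y[e] += delta                # raise the dual until some set is tight
        for s in containing:
            slack[s] -= delta
            if slack[s] <= tol and s not in cover:
                cover.append(s)      # tight sets enter the cover
                covered |= sets[s]
    return cover, y


# Example: four elements, each appearing in at most f = 2 sets.
sets = {'S1': {1, 2}, 'S2': {2, 3}, 'S3': {3, 4}}
cost = {'S1': 1.0, 'S2': 1.0, 'S3': 1.0}
print(primal_dual_set_cover([1, 2, 3, 4], sets, cost))
```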
This paper gives a new deterministic algorithm for the dynamic Minimum Spanning Forest (MSF) problem in the EREW PRAM model, where the goal is to maintain an MSF of a weighted graph with $n$ vertices and $m$ edges while supporting edge insertions and deletions. We show that one can solve the dynamic MSF problem using $O(\sqrt{n})$ processors and $O(\log n)$ worst-case update time, for a total of $O(\sqrt{n}\log n)$ work. This improves on the work of Ferragina [IPPS 1995], which costs $O(\log n)$ worst-case update time and $O(n^{2/3}\log\frac{m}{n})$ work.