
A Simple 1-1/e Approximation for Oblivious Bipartite Matching

Posted by Xiaowei Wu
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





We study the oblivious matching problem, which aims at finding a maximum matching on a graph with an unknown edge set. Any algorithm for the problem specifies an ordering of the vertex pairs. The matching is then produced by probing the pairs following the ordering, and including a pair if both of its vertices are still unmatched and there exists an edge between them. The unweighted (Chan et al. (SICOMP 2018)) and the vertex-weighted (Chan et al. (TALG 2018
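
To make the probing process concrete, here is a minimal Python sketch of an oblivious algorithm that commits to a uniformly random ordering of the vertex pairs before any edge is revealed. The random ordering is only one possible oblivious strategy (not the paper's algorithm), and the names `oblivious_matching` and `hidden_edges` are illustrative; the hidden edge set is passed to the simulator only so that probes can be answered.

import itertools
import random

def oblivious_matching(vertices, hidden_edges, pair_order=None, seed=None):
    # Simulate oblivious probing: the pair ordering is fixed up front,
    # before the algorithm learns anything about the edge set.
    rng = random.Random(seed)
    if pair_order is None:
        pair_order = list(itertools.combinations(vertices, 2))
        rng.shuffle(pair_order)  # one natural oblivious choice: a random order
    matched, matching = set(), []
    for u, v in pair_order:
        # Probe (u, v): include it only if both endpoints are still free
        # and the probed edge turns out to exist.
        if u not in matched and v not in matched and frozenset((u, v)) in hidden_edges:
            matched.update((u, v))
            matching.append((u, v))
    return matching

# Toy usage on a 4-cycle with one chord.
edges = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]}
print(oblivious_matching([1, 2, 3, 4], edges, seed=0))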




Read also

We introduce a weighted version of the ranking algorithm by Karp et al. (STOC 1990), and prove a competitive ratio of 0.6534 for the vertex-weighted online bipartite matching problem when online vertices arrive in random order. Our result shows that random arrivals help beat the 1-1/e barrier even in the vertex-weighted case. We build on the randomized primal-dual framework by Devanur et al. (SODA 2013) and design a two-dimensional gain-sharing function, which depends not only on the rank of the offline vertex, but also on the arrival time of the online vertex. To our knowledge, this is the first competitive ratio strictly larger than 1-1/e for an online bipartite matching problem achieved under the randomized primal-dual framework. Our algorithm has a natural interpretation: offline vertices offer a larger portion of their weights to the online vertices as time goes by, and each online vertex matches the neighbor with the highest offer at its arrival.
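
The interpretation in the last sentence lends itself to a short sketch. The following Python fragment is a hedged illustration rather than the paper's algorithm: it draws a uniform rank for every offline vertex and lets each arriving online vertex take the free neighbor with the highest offer, where the placeholder `offer` function merely grows with time and shrinks with rank, standing in for the paper's two-dimensional gain-sharing function.

import random

def weighted_ranking_sketch(offline_weights, online_arrivals, offer=None, seed=None):
    # offline_weights: dict offline vertex -> weight w_u.
    # online_arrivals: list of (arrival time t in [0, 1], list of neighbors),
    #                  given in arrival order.
    rng = random.Random(seed)
    ranks = {u: rng.random() for u in offline_weights}  # y_u ~ U[0, 1]
    if offer is None:
        # Placeholder offer: larger as time goes by, smaller for higher ranks.
        # This is NOT the gain-sharing function analyzed in the paper.
        offer = lambda w, y, t: w * (1.0 - (1.0 - t) * y)
    matched, matching = set(), []
    for t, neighbors in online_arrivals:
        free = [u for u in neighbors if u not in matched]
        if not free:
            continue
        # Match the free neighbor making the highest offer at time t.
        u = max(free, key=lambda x: offer(offline_weights[x], ranks[x], t))
        matched.add(u)
        matching.append((u, t))
    return matching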
Let $G=(V, E)$ be a given edge-weighted graph and let its {\em realization} $\mathcal{G}$ be a random subgraph of $G$ that includes each edge $e \in E$ independently with probability $p$. In the {\em stochastic matching} problem, the goal is to pick a sparse subgraph $Q$ of $G$ without knowing the realization $\mathcal{G}$, such that the maximum weight matching among the realized edges of $Q$ (i.e., graph $Q \cap \mathcal{G}$) in expectation approximates the maximum weight matching of the whole realization $\mathcal{G}$. In this paper, we prove that for any desirably small $\epsilon \in (0, 1)$, every graph $G$ has a subgraph $Q$ that guarantees a $(1-\epsilon)$-approximation and has maximum degree only $O_{\epsilon, p}(1)$. That is, the maximum degree of $Q$ depends only on $\epsilon$ and $p$ (both of which are known to be necessary) and not, for example, on the number of nodes in $G$, the edge-weights, etc. The stochastic matching problem has been studied extensively on both weighted and unweighted graphs. Previously, only existence of (close to) half-approximate subgraphs was known for weighted graphs [Yamaguchi and Maehara, SODA18; Behnezhad et al., SODA19]. Our result substantially improves over these works, matches the state-of-the-art for unweighted graphs [Behnezhad et al., STOC20], and essentially settles the approximation factor.
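
To make the guarantee tangible, here is a small Monte Carlo check, assuming the networkx library: it samples realizations, intersects them with a candidate subgraph Q, and estimates the ratio between the expected maximum weight matching of Q intersected with the realization and that of the full realization. All function names are illustrative; the paper proves the existence of a good Q, it does not prescribe this estimator.

import random
import networkx as nx

def realized_subgraph(G, p, rng):
    # A realization: keep each edge of G independently with probability p.
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    H.add_edges_from((u, v, d) for u, v, d in G.edges(data=True) if rng.random() < p)
    return H

def matching_weight(G):
    # Weight of a maximum weight matching (missing weights default to 1).
    M = nx.max_weight_matching(G)
    return sum(G[u][v].get("weight", 1) for u, v in M)

def estimate_ratio(Q, G, p, samples=200, seed=0):
    # Monte Carlo estimate of E[OPT(Q ∩ realization)] / E[OPT(realization)].
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(samples):
        R = realized_subgraph(G, p, rng)
        RQ = nx.Graph()
        RQ.add_nodes_from(R.nodes)
        RQ.add_edges_from((u, v, d) for u, v, d in R.edges(data=True) if Q.has_edge(u, v))
        num += matching_weight(RQ)
        den += matching_weight(R)
    return num / den if den else 1.0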
Suppose that we are given an arbitrary graph $G=(V, E)$ and know that each edge in $E$ is going to be realized independently with some probability $p$. The goal in the stochastic matching problem is to pick a sparse subgraph $Q$ of $G$ such that the realized edges in $Q$, in expectation, include a matching that is approximately as large as the maximum matching among the realized edges of $G$. The maximum degree of $Q$ can depend on $p$, but not on the size of $G$. This problem has been the subject of extensive study over the years, and the approximation factor has been improved from $0.5$ to $0.5001$ to $0.6568$ and eventually to $2/3$. In this work, we analyze a natural sampling-based algorithm and show that it can obtain all the way up to a $(1-\epsilon)$ approximation, for any constant $\epsilon > 0$. A key component of our analysis, which may be of independent interest, is an algorithm that constructs a matching on a stochastic graph and, among other important properties, guarantees that each vertex is matched independently of the vertices that are sufficiently far away. This allows us to bypass a previously known barrier towards achieving a $(1-\epsilon)$ approximation based on the existence of dense Ruzsa-Szemeredi graphs.
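
As a hedged illustration of what a sampling-based construction can look like (one natural reading, not necessarily the exact algorithm analyzed in the paper), the sketch below repeatedly samples a realization of G and keeps the edges of a maximum matching of that sample; the union is a subgraph whose maximum degree is bounded by the number of rounds, independent of the size of G. It assumes the networkx library.

import random
import networkx as nx

def sampled_subgraph(G, p, rounds=10, seed=0):
    # Union of maximum matchings of `rounds` independent realizations of G.
    # Each matching raises any vertex's degree by at most one, so the
    # resulting subgraph Q has maximum degree at most `rounds`.
    rng = random.Random(seed)
    Q = nx.Graph()
    Q.add_nodes_from(G.nodes)
    for _ in range(rounds):
        R = nx.Graph()
        R.add_nodes_from(G.nodes)
        R.add_edges_from(e for e in G.edges if rng.random() < p)
        Q.add_edges_from(nx.max_weight_matching(R, maxcardinality=True))
    return Q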
Online bipartite matching and its variants are among the most fundamental problems in the online algorithms literature. Karp, Vazirani, and Vazirani (STOC 1990) introduced an elegant algorithm for the unweighted problem that achieves an optimal competitive ratio of $1-1/e$. Later, Aggarwal et al. (SODA 2011) generalized their algorithm and analysis to the vertex-weighted case. Little is known, however, about the most general edge-weighted problem aside from the trivial $1/2$-competitive greedy algorithm. In this paper, we present the first online algorithm that breaks the long-standing $1/2$ barrier and achieves a competitive ratio of at least $0.5086$. In light of the hardness result of Kapralov, Post, and Vondrak (SODA 2013) that restricts beating a $1/2$ competitive ratio for the more general problem of monotone submodular welfare maximization, our result can be seen as strong evidence that edge-weighted bipartite matching is strictly easier than submodular welfare maximization in the online setting. The main ingredient in our online matching algorithm is a novel subroutine called online correlated selection (OCS), which takes a sequence of pairs of vertices as input and selects one vertex from each pair. Instead of using a fresh random bit to choose a vertex from each pair, the OCS negatively correlates decisions across different pairs and provides a quantitative measure on the level of correlation. We believe our OCS technique is of independent interest and will find further applications in other online optimization problems.
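
The OCS idea can be illustrated with a toy selector. The sketch below is a simplification for intuition only, not the paper's OCS and without its quantitative guarantee: half the time a pair is resolved by a fresh fair coin, and otherwise, when the pair shares exactly one vertex with the previous pair, the shared vertex is picked precisely when it lost the previous coin flip, so each vertex is still selected with marginal probability 1/2 while "never selected" outcomes become rarer than under independent coins.

import random

def toy_correlated_selection(pairs, seed=None):
    # Toy negatively-correlated selector over a sequence of vertex pairs.
    rng = random.Random(seed)
    selections = []
    prev_pair, prev_choice = None, None
    for a, b in pairs:
        choice = None
        if prev_pair is not None and rng.random() < 0.5:
            shared = set(prev_pair) & {a, b}
            if len(shared) == 1:
                u = shared.pop()
                # Pick the shared vertex iff it was NOT picked last time.
                choice = u if prev_choice != u else (b if u == a else a)
        if choice is None:
            choice = a if rng.random() < 0.5 else b  # fresh fair coin
        selections.append(choice)
        prev_pair, prev_choice = (a, b), choice
    return selections

print(toy_correlated_selection([("x", "y"), ("x", "z"), ("z", "w")], seed=1))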
Over three decades ago, Karp, Vazirani and Vazirani (STOC90) introduced the online bipartite matching problem. They observed that deterministic algorithms' competitive ratio for this problem is no greater than $1/2$, and proved that randomized algorithms can do better. A natural question thus arises: \emph{how random is random}? i.e., how much randomness is needed to outperform deterministic algorithms? The \textsc{ranking} algorithm of Karp et al.~requires $\tilde{O}(n)$ random bits, which, ignoring polylog terms, remained unimproved. On the other hand, Pena and Borodin (TCS19) established a lower bound of $(1-o(1))\log\log n$ random bits for any $1/2+\Omega(1)$ competitive ratio. We close this doubly-exponential gap, proving that, surprisingly, the lower bound is tight. In fact, we prove a \emph{sharp threshold} of $(1\pm o(1))\log\log n$ random bits for the randomness necessary and sufficient to outperform deterministic algorithms for this problem, as well as its vertex-weighted generalization. This implies the same threshold for the advice complexity (nondeterminism) of these problems. Similar to recent breakthroughs in the online matching literature, for edge-weighted matching (Fahrbach et al.~FOCS20) and adwords (Huang et al.~FOCS20), our algorithms break the barrier of $1/2$ by randomizing matching choices over two neighbors. Unlike these works, our approach does not rely on the recently-introduced OCS machinery, nor on the more established randomized primal-dual method. Instead, our work revisits a highly-successful online design technique, which was nonetheless under-utilized in the area of online matching, namely (lossless) online rounding of fractional algorithms. While this technique is known to be hopeless for online matching in general, we show that it is nonetheless applicable to carefully designed fractional algorithms with additional (non-convex) constraints.
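
Since the rounding step is the algorithmic heart here, a toy sketch may help fix ideas. The fragment below is an assumption-laden illustration, not the paper's lossless rounding and with no claim about randomness usage: it simply commits each arriving online vertex to one of at most two free neighbors with probability proportional to the fractional mass a caller-supplied `fractional_step` assigns to them.

import random

def round_two_choice_fractional(arrivals, fractional_step, seed=None):
    # arrivals: iterable of (online_vertex, list of offline neighbors).
    # fractional_step: callback (online_vertex, free_neighbors) -> dict
    #     mapping at most two free neighbors to nonnegative fractional mass.
    rng = random.Random(seed)
    matched_offline, matching = set(), {}
    for v, neighbors in arrivals:
        free = [u for u in neighbors if u not in matched_offline]
        masses = fractional_step(v, free)  # e.g. {u1: 0.6, u2: 0.4}
        total = sum(masses.values())
        if total <= 0:
            continue
        # Randomize the matching choice over the (at most two) candidates,
        # proportionally to their fractional mass.
        r, acc = rng.random() * total, 0.0
        for u, x in masses.items():
            acc += x
            if r <= acc:
                matched_offline.add(u)
                matching[v] = u
                break
    return matching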