
Matching is as Easy as the Decision Problem, in the NC Model

Posted by Nima Anari
Publication date: 2019
Research field: Informatics Engineering
Paper language: English

Is matching in NC, i.e., is there a deterministic fast parallel algorithm for it? This has been an outstanding open question in TCS for over three decades, ever since the discovery of randomized NC matching algorithms [KUW85, MVV87]. Over the last five years, the theoretical computer science community has launched a relentless attack on this question, leading to the discovery of several powerful ideas.

We give what appears to be the culmination of this line of work: an NC algorithm for finding a minimum-weight perfect matching in a general graph with polynomially bounded edge weights, provided it is given an oracle for the decision problem. Consequently, for settling the main open problem, it suffices to obtain an NC algorithm for the decision problem. We believe this new fact has qualitatively changed the nature of this open problem.

All known efficient matching algorithms for general graphs follow one of two approaches, due to Edmonds [Edm65] and Lovasz [Lov79]. Our oracle-based algorithm follows a new approach and uses many of the ideas discovered in the last five years.

The difficulty of obtaining an NC perfect matching algorithm led researchers to study matching vis-a-vis clever relaxations of the class NC. In this vein, recently Goldwasser and Grossman [GG15] gave a pseudo-deterministic RNC algorithm for finding a perfect matching in a bipartite graph, i.e., an RNC algorithm with the additional requirement that on the same graph, it should return the same (i.e., unique) perfect matching for almost all choices of random bits. A corollary of our reduction is an analogous algorithm for general graphs.
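To make the search-to-decision idea concrete, here is a minimal sequential sketch of self-reducibility for perfect matching: repeatedly discard edges whose removal preserves the existence of a perfect matching; the surviving edges form one. This is only an illustration of the reduction concept, not the paper's algorithm, which handles minimum-weight perfect matching and makes its oracle calls in NC (in parallel). The brute-force decision oracle below is a hypothetical stand-in for any NC decision procedure.

```python
def has_perfect_matching(vertices, edges):
    """Hypothetical decision oracle: does the graph have a perfect matching?
    Brute force for illustration only; any NC decision procedure could replace it."""
    vertices = sorted(vertices)
    if len(vertices) % 2 == 1:
        return False
    if not vertices:
        return True
    u = vertices[0]
    for (a, b) in edges:
        if u in (a, b):
            v = b if a == u else a
            rest_v = [x for x in vertices if x not in (u, v)]
            rest_e = [e for e in edges if u not in e and v not in e]
            if has_perfect_matching(rest_v, rest_e):
                return True
    return False


def find_perfect_matching(vertices, edges):
    """Search via decision: drop every edge whose removal keeps a perfect matching.
    The surviving edge set is itself a perfect matching."""
    if not has_perfect_matching(vertices, edges):
        return None
    current = list(edges)
    for e in list(current):
        without_e = [f for f in current if f != e]
        if has_perfect_matching(vertices, without_e):
            current = without_e
    return current


# Usage: a 4-cycle has two perfect matchings; one of them is returned.
print(find_perfect_matching([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))
```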


Read also

We investigate the single source shortest distance (SSSD) and all pairs shortest distance (APSD) problems as enumeration problems (on unweighted and integer-weighted graphs), meaning that the elements $(u, v, d(u, v))$ -- where $u$ and $v$ are vertices with shortest distance $d(u, v)$ -- are produced and listed one by one without repetition. The performance is measured in the RAM model of computation with respect to preprocessing time and delay, i.e., the maximum time that elapses between two consecutive outputs. This point of view reveals that specific types of output (e.g., excluding the non-reachable pairs $(u, v, \infty)$, or excluding the self-distances $(u, u, 0)$) and the order of enumeration (e.g., sorted by distance, or sorted row-wise with respect to the distance matrix) have a huge impact on the complexity of APSD, while they appear to have no effect on SSSD. In particular, we show for APSD that enumeration without output restrictions is possible with delay in the order of the average degree. Excluding non-reachable pairs, or requesting the output to be sorted by distance, increases this delay to the order of the maximum degree. Further, for weighted graphs, a delay in the order of the average degree is also not possible without preprocessing or considering self-distances as output. In contrast, for SSSD we find that a delay in the order of the maximum degree without preprocessing is attainable and unavoidable for any of these requirements.
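As a small illustration of the enumeration viewpoint, the sketch below views SSSD on an unweighted graph as producing triples $(s, v, d(s, v))$ one at a time via BFS, so the work between two consecutive outputs is governed by vertex degrees. This is only a toy generator, not the paper's algorithms or delay analysis; the adjacency-list format is an assumption for the example.

```python
from collections import deque


def enumerate_sssd(adj, s):
    """Yield (s, v, d(s, v)) for every vertex v reachable from s, one at a time.

    `adj` maps each vertex to an iterable of neighbours. Self-distance (s, s, 0)
    is included; non-reachable pairs are simply never reported.
    """
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        yield (s, u, dist[u])           # one output per dequeued vertex
        for v in adj.get(u, ()):        # work between outputs ~ deg(u)
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)


# Usage: a path 0 - 1 - 2 plus an isolated vertex 3 (never reported).
graph = {0: [1], 1: [0, 2], 2: [1], 3: []}
for triple in enumerate_sssd(graph, 0):
    print(triple)
```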
Piotr Sankowski (2017)
Consider a planar graph $G=(V,E)$ with a polynomially bounded edge weight function $w: E \to [0, \mathrm{poly}(n)]$. The main results of this paper are NC algorithms for the following problems:
- minimum weight perfect matching in $G$,
- maximum cardinality and maximum weight matching in $G$ when $G$ is bipartite,
- maximum multiple-source multiple-sink flow in $G$, where $c: E \to [1, \mathrm{poly}(n)]$ is a polynomially bounded edge capacity function,
- minimum weight $f$-factor in $G$, where $f: V \to [1, \mathrm{poly}(n)]$,
- min-cost flow in $G$, where $c: E \to [1, \mathrm{poly}(n)]$ is a polynomially bounded edge capacity function and $b: V \to [1, \mathrm{poly}(n)]$ is a polynomially bounded vertex demand function.
No NC algorithms were previously known for any of these problems (prior to this paper and the independent paper by Anari and Vazirani). In order to solve these problems we develop a new, relatively simple but versatile framework that is combinatorial in spirit. It handles the combinatorial structure of matchings directly and only needs to know the weights of appropriately defined matchings, obtained from algebraic subroutines.
Suppose that we are given an arbitrary graph $G=(V, E)$ and know that each edge in $E$ is going to be realized independently with some probability $p$. The goal in the stochastic matching problem is to pick a sparse subgraph $Q$ of $G$ such that the realized edges in $Q$, in expectation, include a matching that is approximately as large as the maximum matching among the realized edges of $G$. The maximum degree of $Q$ can depend on $p$, but not on the size of $G$. This problem has been subject to extensive study over the years, and the approximation factor has been improved from $0.5$ to $0.5001$ to $0.6568$ and eventually to $2/3$. In this work, we analyze a natural sampling-based algorithm and show that it can obtain all the way up to a $(1-\epsilon)$ approximation, for any constant $\epsilon > 0$. A key component of our analysis, possibly of independent interest, is an algorithm that constructs a matching on a stochastic graph which, among other important properties, guarantees that each vertex is matched independently of the vertices that are sufficiently far away. This allows us to bypass a previously known barrier towards achieving a $(1-\epsilon)$ approximation based on the existence of dense Ruzsa-Szemeredi graphs.
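One natural sampling-based construction (my reading of "sampling-based", stated here as an assumption rather than the paper's exact algorithm) is: sample $R$ independent realizations of $G$, compute a matching in each, and let $Q$ be their union, so every vertex has degree at most $R$ in $Q$ independently of $|G|$. The sketch below uses a greedy maximal matching as a simple stand-in subroutine; the paper's analysis concerns maximum matchings and a more careful matching procedure.

```python
import random


def greedy_matching(edges):
    """Return a maximal matching (simple stand-in subroutine, not maximum)."""
    matched, matching = set(), []
    for (u, v) in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching


def sample_subgraph(edges, p, rounds, seed=0):
    """Union of matchings from `rounds` independent realizations of the graph."""
    rng = random.Random(seed)
    q = set()
    for _ in range(rounds):
        realized = [e for e in edges if rng.random() < p]
        q.update(greedy_matching(realized))
    return sorted(q)


# Usage: a small 6-cycle with edge-realization probability 0.5 and R = 8 rounds.
cycle = [(i, (i + 1) % 6) for i in range(6)]
print(sample_subgraph(cycle, p=0.5, rounds=8))
```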
We give fault-tolerant algorithms for establishing synchrony in distributed systems in which each of the $n$ nodes has its own clock. Our algorithms operate in a very strong fault model: we require self-stabilisation, i.e., the initial state of the system may be arbitrary, and there can be up to $f < n/3$ ongoing Byzantine faults, i.e., nodes that deviate from the protocol in an arbitrary manner. Furthermore, we assume that the local clocks of the nodes may progress at different speeds (clock drift) and communication has bounded delay. In this model, we study the pulse synchronisation problem, where the task is to guarantee that eventually all correct nodes generate well-separated local pulse events (i.e., unlabelled logical clock ticks) in a synchronised manner. Compared to prior work, we achieve exponential improvements in stabilisation time and the number of communicated bits, and give the first sublinear-time algorithm for the problem:
- In the deterministic setting, the state-of-the-art solutions stabilise in time $\Theta(f)$ and have each node broadcast $\Theta(f \log f)$ bits per time unit. We exponentially reduce the number of bits broadcast per time unit to $\Theta(\log f)$ while retaining the same stabilisation time.
- In the randomised setting, the state-of-the-art solutions stabilise in time $\Theta(f)$ and have each node broadcast $O(1)$ bits per time unit. We exponentially reduce the stabilisation time to $\log^{O(1)} f$ while each node broadcasts $\log^{O(1)} f$ bits per time unit.
These results are obtained by means of a recursive approach reducing the above task of self-stabilising pulse synchronisation in the bounded-delay model to non-self-stabilising binary consensus in the synchronous model. In general, our approach introduces at most logarithmic overheads in terms of stabilisation time and broadcast bits over the underlying consensus routine.
Stochastic compositional optimization generalizes classic (non-compositional) stochastic optimization to the minimization of compositions of functions. Each composition may introduce an additional expectation, and the series of expectations may be nested. Stochastic compositional optimization is gaining popularity in applications such as reinforcement learning and meta learning. This paper presents a new Stochastically Corrected Stochastic Compositional gradient method (SCSC). SCSC runs in a single time scale with a single loop, uses a fixed batch size, and is guaranteed to converge at the same rate as the stochastic gradient descent (SGD) method for non-compositional stochastic optimization. This is achieved by making a careful improvement to a popular stochastic compositional gradient method. It is easy to apply SGD-improvement techniques to accelerate SCSC, which helps SCSC achieve state-of-the-art performance for stochastic compositional optimization. In particular, we apply Adam to SCSC, and the exhibited rate of convergence matches that of the original Adam on non-compositional stochastic optimization. We test SCSC using the portfolio management and model-agnostic meta-learning tasks.
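For readers unfamiliar with the setting, the toy sketch below minimizes a scalar compositional objective $f(\mathbb{E}_{\xi}[g(x;\xi)])$ by tracking the inner expectation with a plain running average, in the spirit of classic stochastic compositional gradient methods. SCSC's specific correction term, batch sizes, and step sizes are the paper's contribution and are not reproduced here; the instance, step sizes, and variable names are assumptions for illustration only.

```python
import random

rng = random.Random(0)

# Toy instance: g(x; xi) = x + xi with E[xi] = 0 and f(y) = y^2,
# so the true objective is f(E[g(x)]) = x^2, minimized at x = 0.
def g(x, xi):
    return x + xi

def g_grad(x, xi):
    return 1.0

def f_grad(y):
    return 2.0 * y


x, y = 5.0, 0.0            # iterate and running estimate of E[g(x)]
alpha, beta = 0.05, 0.2    # step size for x, averaging weight for y

for k in range(2000):
    xi = rng.gauss(0.0, 1.0)              # fresh inner sample
    y = (1 - beta) * y + beta * g(x, xi)  # track the inner expectation
    grad = g_grad(x, xi) * f_grad(y)      # chain rule through the tracked value
    x -= alpha * grad

print(f"final x ~ {x:.3f}  (true minimizer is 0)")
```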