
Sensitivity Analysis of the Maximum Matching Problem

Submitted by Samson Zhou
Publication date: 2020
Research field: Informatics engineering
Paper language: English





We consider the sensitivity of algorithms for the maximum matching problem against edge and vertex modifications. Algorithms with low sensitivity are desirable because they are robust to edge failure or attack. In this work, we show a randomized $(1-\epsilon)$-approximation algorithm with worst-case sensitivity $O_{\epsilon}(1)$, which substantially improves upon the $(1-\epsilon)$-approximation algorithm of Varma and Yoshida (arXiv 2020) that obtains average sensitivity $n^{O(1/(1+\epsilon^2))}$, and we show a deterministic $1/2$-approximation algorithm with sensitivity $\exp(O(\log^* n))$ for bounded-degree graphs. We show that any deterministic constant-factor approximation algorithm must have sensitivity $\Omega(\log^* n)$. Our results imply that randomized algorithms are strictly more powerful than deterministic ones, in that the former can achieve sensitivity independent of $n$ whereas the latter cannot. We also show analogous results for vertex sensitivity, where we remove a vertex instead of an edge. As an application of our results, we give an algorithm for online maximum matching with $O_{\epsilon}(n)$ total replacements in the vertex-arrival model. By comparison, Bernstein et al. (J. ACM 2019) gave an online algorithm that always outputs the maximum matching, but only for bipartite graphs and with $O(n \log n)$ total replacements. Finally, we introduce the notion of normalized weighted sensitivity, a natural generalization of sensitivity that accounts for the weights of deleted edges. We show that if all edges in a graph have polynomially bounded weight, then given a trade-off parameter $\alpha > 2$, there exists an algorithm that outputs a $\frac{1}{4\alpha}$-approximation to the maximum weighted matching in $O(m \log_{\alpha} n)$ time, with normalized weighted sensitivity $O(1)$. See paper for full abstract.
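To make the notion concrete, here is a minimal Python sketch, not the paper's algorithm, of edge sensitivity for a fixed deterministic algorithm: run a greedy maximal matching on the graph and on each graph obtained by deleting one edge, and report the largest symmetric difference between the outputs. The greedy rule and the path graph below are illustrative choices.

```python
# Minimal sketch (not the paper's algorithm) of worst-case edge sensitivity:
# how much the output matching of a fixed algorithm changes when one edge
# of the input is deleted.

def greedy_matching(edges):
    """Maximal matching: scan edges in the given order and add an edge if
    both endpoints are still unmatched (a 1/2-approximation)."""
    matched, matching = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.add((u, v))
            matched.update((u, v))
    return matching

def edge_sensitivity(edges):
    """Worst-case sensitivity of greedy_matching on this edge list:
    max over deleted edges e of |M(G) symmetric-difference M(G - e)|."""
    base = greedy_matching(edges)
    worst = 0
    for i in range(len(edges)):
        reduced = edges[:i] + edges[i + 1:]
        worst = max(worst, len(base ^ greedy_matching(reduced)))
    return worst

if __name__ == "__main__":
    # Path a-b-c-d-e: deleting (a, b) forces greedy to rematch downstream,
    # so the two outputs differ on several edges.
    path = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]
    print(edge_sensitivity(path))   # prints 4
```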




Read also

Kook Jin Ahn, Sudipto Guha (2011)
In this paper, we study linear programming based approaches to the maximum matching problem in the semi-streaming model. The semi-streaming model has gained attention as a model for processing massive graphs as the importance of such graphs has increased. This is a model where edges are streamed in an adversarial order and we are allowed space proportional to the number of vertices in the graph. In recent years, there have been several new results in this semi-streaming model. However, broad techniques such as linear programming have not been adapted to this model. We present several techniques to adapt and optimize linear programming based approaches in the semi-streaming model, with an application to the maximum matching problem. As a consequence, we improve (almost) all previous results on this problem, and also prove new results on interesting variants.
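For context, the sketch below illustrates the semi-streaming setting itself rather than the paper's LP-based techniques: edges arrive one at a time in an adversarial order, the algorithm keeps only O(n) state, and the classic greedy baseline maintains a maximal matching, i.e., a 1/2-approximation. The example stream is an illustrative assumption.

```python
# Illustrative sketch of the semi-streaming setting, not the LP-based method
# of the paper: edges arrive one at a time in adversarial order and the
# algorithm may keep only O(n) state.  The classic greedy baseline below
# maintains a maximal matching, a 1/2-approximation to the maximum matching.

def semi_streaming_greedy(edge_stream):
    matched = set()       # O(n) space: vertices already matched
    matching = []         # O(n) space: at most n/2 matched edges
    for u, v in edge_stream:                 # single pass over the stream
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# An adversarial order for the path 1-2-3-4-5-6: greedy keeps two edges,
# while the maximum matching has three.
print(semi_streaming_greedy(iter([(2, 3), (4, 5), (1, 2), (3, 4), (5, 6)])))
```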
Kook Jin Ahn, Sudipto Guha (2011)
In this paper, we study the non-bipartite maximum matching problem in the semi-streaming model. The maximum matching problem in the semi-streaming model has received a significant amount of attention lately. While the problem has been somewhat well solved for bipartite graphs, the known algorithms for non-bipartite graphs use $2^{\frac{1}{\epsilon}}$ passes or $n^{\frac{1}{\epsilon}}$ time to compute a $(1-\epsilon)$ approximation. In this paper we provide the first FPTAS (polynomial in $n, \frac{1}{\epsilon}$) for the problem which is efficient in both the running time and the number of passes. We also show that we can estimate the size of the matching in $O(\frac{1}{\epsilon})$ passes using slightly superlinear space. To achieve both results, we use structural properties of the matching polytope such as the laminarity of the tight sets and total dual integrality. The algorithms are iterative and are based on the fractional packing and covering framework. However, the formulations herein require exponentially many variables or constraints. We use laminarity, metric embeddings and graph sparsification to reduce the space required by the algorithms in between and across the iterations. This is the first use of these ideas in the semi-streaming model to solve a combinatorial optimization problem.
Moran Feldman, Ariel Szarf (2021)
The problem of finding a maximum size matching in a graph (known as the maximum matching problem) is one of the most classical problems in computer science. Despite a significant body of work dedicated to the study of this problem in the data stream model, the state-of-the-art single-pass semi-streaming algorithm for it is still a simple greedy algorithm that computes a maximal matching, and this way obtains a 1/2-approximation. Some previous works described two/three-pass algorithms that improve over this approximation ratio by using their second and third passes to improve the above-mentioned maximal matching. One contribution of this paper continues this line of work by presenting new three-pass semi-streaming algorithms that work along these lines and obtain improved approximation ratios of 0.6111 and 0.5694 for triangle-free and general graphs, respectively. Unfortunately, a recent work (Konrad and Naidu, 2021) shows that the strategy of constructing a maximal matching in the first pass and then improving it in further passes has limitations, and this technique is unlikely to get us closer to single-pass semi-streaming algorithms obtaining a better than 1/2-approximation. Therefore, it is interesting to come up with algorithms that do something else with their first pass (we term such algorithms non-maximal-matching-first algorithms). No such algorithms are currently known (to the best of our knowledge), and the main contribution of this paper is describing such algorithms that obtain approximation ratios of 0.5384 and 0.5555 in two and three passes, respectively, for general graphs (the result for three passes improves over the previous state-of-the-art, but is worse than the result of this paper mentioned in the previous paragraph for general graphs).
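As a rough illustration of the maximal-matching-first template discussed above (and not of the paper's algorithms), the sketch below builds a greedy maximal matching in a first pass, then uses a second pass to record, for each matched vertex, one stream edge to a free vertex, and finally applies a disjoint set of length-3 augmenting paths x-u-v-y. All names and the example stream are illustrative.

```python
# Illustrative two-pass sketch of the "maximal-matching-first" template
# (not the paper's algorithms): pass 1 builds a greedy maximal matching,
# pass 2 records a free neighbour ("wing") per matched vertex, and a
# post-processing step applies disjoint length-3 augmenting paths
# x - u - v - y, each of which grows the matching by one edge.

def two_pass_matching(edge_stream):
    edges = list(edge_stream)

    # Pass 1: greedy maximal matching (a 1/2-approximation).
    mate = {}
    for u, v in edges:
        if u not in mate and v not in mate:
            mate[u], mate[v] = v, u
    pass1_edges = {tuple(sorted((u, mate[u]))) for u in mate}

    # Pass 2: remember one free neighbour per matched vertex.
    wings = {}
    for u, v in edges:
        if (u in mate) != (v in mate):              # exactly one endpoint matched
            matched_end, free_end = (u, v) if u in mate else (v, u)
            wings.setdefault(matched_end, free_end)

    # Apply a disjoint set of length-3 augmentations.
    used_free = set()
    for u, v in pass1_edges:
        x, y = wings.get(u), wings.get(v)
        if (x is not None and y is not None and x != y
                and x not in used_free and y not in used_free):
            mate[u], mate[x] = x, u                 # replace (u, v) ...
            mate[v], mate[y] = y, v                 # ... with (x, u) and (v, y)
            used_free.update((x, y))

    return {tuple(sorted((a, b))) for a, b in mate.items()}

# Path 1-2-3-4 streamed so that pass 1 matches only (2, 3); the second pass
# upgrades this to the maximum matching {(1, 2), (3, 4)}.
print(two_pass_matching([(2, 3), (1, 2), (3, 4)]))
```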
We study the problem of matching agents who arrive at a marketplace over time and leave after d time periods. Agents can only be matched while they are present in the marketplace. Each pair of agents can yield a different match value, and the planner's goal is to maximize the total value over a finite time horizon. First, we study the case in which vertices arrive in an adversarial order. We provide a randomized 0.25-competitive algorithm building on a result by Feldman et al. (2009) and Lehman et al. (2006). We extend the model to the case in which departure times are drawn independently from a distribution with non-decreasing hazard rate, for which we establish a 1/8-competitive algorithm. When the arrival order is chosen uniformly at random, we show that a batching algorithm, which computes a maximum-weight matching every (d+1) periods, is 0.279-competitive.
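A minimal sketch of the batching idea described above, under illustrative assumptions: agents arriving within a window of d+1 periods are pooled and a maximum-weight matching is computed once per window. The agent records and the value function are made up for the example, and the matching subroutine is networkx.max_weight_matching rather than a bespoke solver.

```python
# Minimal sketch of the batching heuristic: pool agents that arrive within a
# window of d+1 periods, then run one maximum-weight matching per window.
import networkx as nx

def batching_match(arrivals, d, value):
    """arrivals: dict period -> list of agent ids; value(a, b) -> match weight."""
    matched_pairs = []
    horizon = max(arrivals) + 1
    for start in range(0, horizon, d + 1):              # one batch per d+1 periods
        pool = [a for t in range(start, min(start + d + 1, horizon))
                for a in arrivals.get(t, [])]
        G = nx.Graph()
        G.add_nodes_from(pool)
        for i, a in enumerate(pool):                    # complete graph on the batch
            for b in pool[i + 1:]:
                G.add_edge(a, b, weight=value(a, b))
        matched_pairs.extend(nx.max_weight_matching(G))
    return matched_pairs

# Toy run: agents stay d = 1 period, so each batch pools two arrival periods.
arrivals = {0: ["a1", "a2"], 1: ["a3"], 2: ["a4", "a5"]}
value = lambda a, b: (ord(a[-1]) * ord(b[-1])) % 7 + 1  # placeholder match values
print(batching_match(arrivals, d=1, value=value))
```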
The optimized assignment of staff is of great significance for improving the production efficiency of society. For specific tasks, the key to optimizing staffing is personnel scheduling, and the assignment problem is a classical problem in personnel scheduling. In this paper, we abstract it as an optimal matching model on a bipartite graph and propose the Ultimate Hungarian Algorithm (UHA). By introducing feasible labels and iteratively searching for augmenting paths, the algorithm obtains an optimal (maximum-weight) matching. We compare the algorithm with the traditional brute-force method and conclude that our algorithm has lower time complexity and can solve maximum-weight matching problems more effectively.
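As a concrete baseline for the assignment problem described above (not the paper's UHA), the sketch below solves a small staff-to-task instance with SciPy's linear_sum_assignment; the weight matrix is an illustrative assumption.

```python
# Baseline for the assignment problem using an off-the-shelf solver.
import numpy as np
from scipy.optimize import linear_sum_assignment

# weight[i][j] = value of assigning staff member i to task j (made-up numbers).
weight = np.array([[4, 1, 3],
                   [2, 0, 5],
                   [3, 2, 2]])

# linear_sum_assignment minimizes cost by default; maximize=True requests the
# maximum-weight perfect matching of this bipartite staff/task graph.
rows, cols = linear_sum_assignment(weight, maximize=True)
print(list(zip(rows, cols)), "total value:", weight[rows, cols].sum())
```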