The Micali-Vazirani (MV) algorithm for maximum cardinality matching in general graphs, which was published in 1980 \cite{MV}, remains to this day the most efficient known algorithm for the problem. This paper gives the first complete and correct proof of this algorithm. Central to our proof are some purely graph-theoretic facts, capturing properties of minimum length alternating paths; these may be of independent interest. An attempt is made to render the algorithm easier to comprehend.
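The MV algorithm itself is intricate; as a point of reference, the sketch below (not the MV algorithm) shows the underlying augmenting-path principle in the simpler bipartite setting: repeatedly find a path that alternates between unmatched and matched edges and ends at a free vertex, then flip it to gain one matched edge. MV extends this idea to general graphs, where blossoms complicate the search, and batches the searches into phases over minimum-length alternating paths. The function names are illustrative.

```python
# A minimal illustration of the augmenting-path principle, in the bipartite
# special case (Kuhn's algorithm). This is NOT the MV algorithm: MV handles
# general graphs (blossoms) and organizes the search into phases of
# minimum-length augmenting paths to run in O(sqrt(V) * E) time.

def max_bipartite_matching(adj, n_left, n_right):
    """adj[u] lists right-side neighbors of left vertex u."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or the left vertex matched to v can be re-matched
            # elsewhere: flipping the alternating path gains one edge.
            if match_right[v] == -1 or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    size = 0
    for u in range(n_left):
        if try_augment(u, set()):
            size += 1
    return size, match_right
```

For example, `max_bipartite_matching({0: [0, 1], 1: [0]}, 2, 2)` returns a matching of size 2: the second search re-routes vertex 0's match to make room for vertex 1.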
We propose a weighted common subgraph (WCS) matching algorithm to find the most similar subgraphs in two labeled weighted graphs. WCS matching, as a natural generalization of equal-sized graph matching and subgraph matching, finds wide application in many computer vision and machine learning tasks. In this paper, WCS matching is first formulated as a combinatorial optimization problem over the set of partial permutation matrices. It is then approximately solved by a recently proposed combinatorial optimization framework, the Graduated NonConvexity and Concavity Procedure (GNCCP). Experimental comparisons on both synthetic graphs and real-world images validate its robustness with respect to noise level, problem size, number of outliers, and edge density.
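To make the formulation concrete, the sketch below shows one common way such objectives are written: a Koopmans-Beckmann, QAP-style score over partial permutation matrices. This is an assumed illustration; the paper's exact objective (which also accounts for labels) and its GNCCP relaxation may differ in detail, and the function names are hypothetical.

```python
import numpy as np

# X is a partial permutation matrix: binary, with each row and column
# summing to at most 1, so X[i, j] = 1 maps node i of graph 1 to node j
# of graph 2 and unmatched nodes are left out of the common subgraph.

def is_partial_permutation(X):
    X = np.asarray(X)
    binary = np.isin(X, (0, 1)).all()
    return binary and (X.sum(axis=0) <= 1).all() and (X.sum(axis=1) <= 1).all()

def alignment_score(A1, A2, X):
    """QAP-style score for weighted adjacency matrices A1, A2.

    Large when edge (i, k) of graph 1 maps to a similarly heavy edge of
    graph 2 under the partial mapping encoded by X.
    """
    return np.trace(A1 @ X @ A2 @ X.T)
```

GNCCP-style methods relax the binary constraint to the convex hull of such matrices and gradually steer the relaxed optimum back to a discrete partial permutation.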
The maximum matching problem in dynamic graphs subject to edge updates (insertions and deletions) has received much attention over the last few years; a multitude of approximation/time tradeoffs were obtained, improving upon the folklore algorithm, which maintains a maximal (and hence $2$-approximate) matching in $O(n)$ worst-case update time in $n$-node graphs. We present the first deterministic algorithm which outperforms the folklore algorithm in terms of \emph{both} approximation ratio and worst-case update time. Specifically, we give a $(2-\Omega(1))$-approximate algorithm with $O(\sqrt{n}\sqrt[8]{m}) = O(n^{3/4})$ worst-case update time in $n$-node, $m$-edge graphs. For sufficiently small constant $\epsilon>0$, no deterministic $(2+\epsilon)$-approximate algorithm with worst-case update time $O(n^{0.99})$ was known. Our second result is the first deterministic $(2+\epsilon)$-approximate (weighted) matching algorithm with $O_\epsilon(1)\cdot O(\sqrt[4]{m}) = O_\epsilon(1)\cdot O(\sqrt{n})$ worst-case update time.
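For reference, the static analogue of the folklore baseline is the greedy maximal matching below. A maximal matching is at least half the size of a maximum one, since each of its edges is incident to at most two edges of an optimal matching; the dynamic folklore algorithm maintains maximality under insertions and deletions in $O(n)$ worst-case time per update. The function name is illustrative.

```python
# Greedy maximal matching: scan the edges and accept any edge whose
# endpoints are both still unmatched. On termination every edge of the
# graph has a matched endpoint, so the matching is maximal and therefore
# a 2-approximation of the maximum matching.

def greedy_maximal_matching(edges):
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching
```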
The minimum degree algorithm is one of the most widely used heuristics for reducing the cost of solving large sparse systems of linear equations. It has been studied for nearly half a century and has a rich history of bridging techniques from data structures, graph algorithms, and scientific computing. In this paper, we present a simple but novel combinatorial algorithm for computing an exact minimum degree elimination ordering in $O(nm)$ time, which improves on the best known time complexity of $O(n^3)$ and offers practical improvements for sparse systems with small values of $m$. Our approach leverages a careful amortized analysis, which also allows us to derive output-sensitive bounds for the running time of $O(\min\{m\sqrt{m^+}, \Delta m^+\} \log n)$, where $m^+$ is the number of unique fill edges and original edges that the algorithm encounters and $\Delta$ is the maximum degree of the input graph. Furthermore, we show there cannot exist an exact minimum degree algorithm that runs in $O(nm^{1-\varepsilon})$ time, for any $\varepsilon > 0$, assuming the strong exponential time hypothesis. This fine-grained reduction goes through the orthogonal vectors problem and uses a new low-degree graph construction called $U$-fillers, which act as pathological inputs and cause any minimum degree algorithm to exhibit nearly worst-case performance. With these two results, we nearly characterize the time complexity of computing an exact minimum degree ordering.
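To make the object of study concrete, here is a deliberately naive minimum degree ordering: repeatedly eliminate a vertex of minimum degree and add fill edges so that its remaining neighborhood forms a clique. This sketch is not the paper's $O(nm)$ algorithm and can be much slower when elimination produces dense fill; the names are illustrative.

```python
import heapq

def min_degree_ordering(adj):
    """adj: dict mapping each vertex to a set of neighbors (undirected)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    heap = [(len(nbrs), v) for v, nbrs in adj.items()]
    heapq.heapify(heap)
    eliminated, order = set(), []
    while heap:
        d, v = heapq.heappop(heap)
        if v in eliminated or d != len(adj[v]):
            continue  # stale heap entry (lazy deletion)
        order.append(v)
        eliminated.add(v)
        nbrs = list(adj.pop(v))
        for u in nbrs:
            adj[u].discard(v)
        # Fill: connect all pairs of v's remaining neighbors.
        for i, u in enumerate(nbrs):
            for w in nbrs[i + 1:]:
                if w not in adj[u]:
                    adj[u].add(w)
                    adj[w].add(u)
        # Re-insert neighbors with their updated degrees.
        for u in nbrs:
            heapq.heappush(heap, (len(adj[u]), u))
    return order
```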
We design new serial and parallel approximation algorithms for computing a maximum weight $b$-matching in an edge-weighted graph with a submodular objective function. This problem is NP-hard; the new algorithms have approximation ratio $1/3$, and are relaxations of the Greedy algorithm that rely only on local information in the graph, making them parallelizable. We have designed and implemented Local Lazy Greedy algorithms for both serial and parallel computers. We have applied the approximate submodular $b$-matching algorithm to assign tasks to processors in the computation of Fock matrices in quantum chemistry on parallel computers. The assignment seeks to reduce the run time by balancing the computational load on the processors and bounding the number of messages that each processor sends. We show that the new assignment of tasks to processors provides a fourfold speedup over the currently used assignment in the NWChemEx software on $8000$ processors on the Summit supercomputer at Oak Ridge National Lab.
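As a baseline for the flavor of algorithm being relaxed, the sketch below is a plain greedy $b$-matching with a linear (modular) objective. The paper's Local Lazy Greedy algorithms generalize this to submodular objectives, where an edge's marginal gain depends on the edges already selected, and localize the greedy comparisons so they can run in parallel; this sketch and its names are illustrative, not the paper's algorithm.

```python
# Greedy b-matching with a linear objective: take edges in order of
# decreasing weight, accepting an edge only if both endpoints still have
# residual capacity under their b-values.

def greedy_b_matching(edges, b):
    """edges: list of (weight, u, v); b: dict of per-vertex capacities."""
    remaining = dict(b)
    chosen = []
    for w, u, v in sorted(edges, key=lambda e: e[0], reverse=True):
        if remaining[u] > 0 and remaining[v] > 0:
            chosen.append((u, v))
            remaining[u] -= 1
            remaining[v] -= 1
    return chosen
```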
In the minimum cost submodular cover problem (MinSMC), we are given a monotone non-decreasing submodular function $f\colon 2^V \rightarrow \mathbb{Z}^+$, a cost function $c\colon V\rightarrow \mathbb{R}^{+}$, and an integer $k\leq f(V)$; the goal is to find a subset $A\subseteq V$ of minimum cost such that $f(A)\geq k$. MinSMC has many applications in machine learning and data mining. In this paper, we design a parallel algorithm for MinSMC which obtains a solution with approximation ratio at most $\frac{H(\min\{\Delta,k\})}{1-5\varepsilon}$ with probability $1-3\varepsilon$ in $O(\frac{\log m\log n\log^2 mn}{\varepsilon^4})$ rounds, where $\Delta=\max_{v\in V}f(v)$, $H(\cdot)$ is the Harmonic number, $n=f(V)$, $m=|V|$, and $\varepsilon$ is a constant in $(0,\frac{1}{5})$. This is the first parallel algorithm for the weighted version of the MinSMC problem with an approximation ratio arbitrarily close to $H(\min\{\Delta,k\})$.
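The benchmark ratio comes from the classical serial greedy for submodular cover, sketched below under the stated assumptions (integer-valued, monotone submodular $f$; positive costs): repeatedly add the element with the best marginal-gain-to-cost ratio until the target $k$ is reached. The paper's contribution is approaching this ratio, up to the $(1-5\varepsilon)$ factor, in polylogarithmically many parallel rounds; the function names here are illustrative.

```python
# Classical serial greedy for minimum cost submodular cover, which achieves
# an approximation ratio on the order of H(min{Delta, k}) for integer-valued
# monotone submodular f (Wolsey-style analysis).

def greedy_min_submodular_cover(V, f, c, k):
    """V: set of elements; f: set -> int, monotone submodular;
    c: element -> positive cost; k: coverage target with k <= f(V)."""
    A = set()
    while f(A) < k:
        # Truncate gains at k so progress past the target is not rewarded.
        gain = lambda v: min(f(A | {v}), k) - min(f(A), k)
        best = max((v for v in V - A if gain(v) > 0),
                   key=lambda v: gain(v) / c(v), default=None)
        if best is None:
            raise ValueError("target k is not reachable")
        A.add(best)
    return A
```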