
Solving Tall Dense Linear Programs in Nearly Linear Time

Added by Jan van den Brand
Publication date: 2020
Language: English




In this paper we provide an $\tilde{O}(nd+d^{3})$ time randomized algorithm for solving linear programs with $d$ variables and $n$ constraints with high probability. To obtain this result we provide a robust, primal-dual $\tilde{O}(\sqrt{d})$-iteration interior point method inspired by the methods of Lee and Sidford (2014, 2019) and show how to efficiently implement this method using new data structures based on heavy hitters, the Johnson-Lindenstrauss lemma, and inverse maintenance. Interestingly, we obtain this running time without using fast matrix multiplication; consequently, barring a major advance in linear system solving, our running time is near optimal for solving dense linear programs among algorithms that do not use fast matrix multiplication.
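
The "inverse maintenance" ingredient can be illustrated with a much simpler primitive than the paper's heavy-hitter and Johnson-Lindenstrauss based data structures: if only a few entries of the diagonal weight matrix $D$ change between interior point iterations, the inverse of $A^{\top} D A$ can be updated with the Woodbury identity instead of being recomputed from scratch. The sketch below is a minimal illustration in plain numpy (the helper name woodbury_update is ours, not the paper's), in the tall regime $n \gg d$.

```python
import numpy as np

def woodbury_update(Minv, A, rows, delta):
    """Update Minv = (A^T D A)^{-1} after D[i, i] += delta[i] for i in rows."""
    U = A[rows]                                   # k x d block of changed rows
    C = np.diag(1.0 / delta) + U @ Minv @ U.T     # k x k capacitance matrix
    return Minv - Minv @ U.T @ np.linalg.solve(C, U @ Minv)

rng = np.random.default_rng(0)
n, d = 2000, 20                                   # tall regime: n >> d
A = rng.standard_normal((n, d))
D = np.ones(n)
Minv = np.linalg.inv(A.T @ (D[:, None] * A))      # explicit inverse, computed once

rows = np.array([3, 17, 42])                      # only a few weights change per step
delta = rng.uniform(0.1, 1.0, size=rows.size)
Minv = woodbury_update(Minv, A, rows, delta)      # O(d^2 k + k^3) update
D[rows] += delta

# sanity check against full recomputation
assert np.allclose(Minv, np.linalg.inv(A.T @ (D[:, None] * A)))
```

Each such update costs $O(d^{2}k + k^{3})$ for $k$ changed weights, versus $\Theta(nd^{2})$ for recomputation; the paper's data structures push this kind of amortized saving much further.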

Related research

We present an $\tilde{O}(m+n^{1.5})$-time randomized algorithm for maximum cardinality bipartite matching and related problems (e.g. transshipment, negative-weight shortest paths, and optimal transport) on $m$-edge, $n$-node graphs. For maximum cardinality bipartite matching on moderately dense graphs, i.e. $m = \Omega(n^{1.5})$, our algorithm runs in time nearly linear in the input size and constitutes the first improvement over the classic $O(m\sqrt{n})$-time [Dinic 1970; Hopcroft-Karp 1971; Karzanov 1973] and $\tilde{O}(n^{\omega})$-time algorithms [Ibarra-Moran 1981] (where currently $\omega \approx 2.373$). On sparser graphs, i.e. when $m = n^{9/8 + \delta}$ for any constant $\delta>0$, our result improves upon the recent advances of [Madry 2013] and [Liu-Sidford 2020b, 2020a], which achieve an $\tilde{O}(m^{4/3+o(1)})$ runtime. We obtain these results by combining and advancing recent lines of research in interior point methods (IPMs) and dynamic graph algorithms. First, we simplify and improve the IPM of [v.d.Brand-Lee-Sidford-Song 2020], providing a general primal-dual IPM framework and new sampling-based techniques for handling infeasibility induced by approximate linear system solvers. Second, we provide a simple sublinear-time algorithm for detecting and sampling high-energy edges in electric flows on expanders and show that when combined with recent advances in dynamic expander decompositions, this yields efficient data structures for maintaining the iterates of both [v.d.Brand et al.] and our new IPMs. Combining this general machinery yields a simpler $\tilde{O}(n \sqrt{m})$ time algorithm for matching based on the logarithmic barrier function, and our state-of-the-art $\tilde{O}(m+n^{1.5})$ time algorithm for matching based on the [Lee-Sidford 2014] barrier (as regularized in [v.d.Brand et al.]).
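
To make the "high-energy edges in electric flows" step above concrete, here is a dense toy computation (a direct pseudoinverse solve, not the paper's sublinear-time, expander-based data structure): route one unit of current between two vertices, recover the electrical flow from the graph Laplacian, and rank edges by their energy $r_e f_e^2$.

```python
import numpy as np

def electric_flow_energies(edges, n, s, t, resistances=None):
    m = len(edges)
    r = np.ones(m) if resistances is None else np.asarray(resistances, float)
    B = np.zeros((m, n))                      # signed edge-vertex incidence matrix
    for e, (u, v) in enumerate(edges):
        B[e, u], B[e, v] = 1.0, -1.0
    L = B.T @ np.diag(1.0 / r) @ B            # weighted graph Laplacian
    b = np.zeros(n); b[s], b[t] = 1.0, -1.0   # one unit of s-t demand
    phi = np.linalg.pinv(L) @ b               # vertex potentials (L is singular)
    flow = (B @ phi) / r                      # Ohm's law, edge by edge
    return r * flow**2                        # per-edge energy r_e * f_e^2

edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
energies = electric_flow_energies(edges, n=4, s=0, t=3)
print(sorted(zip(energies, edges), reverse=True)[:2])   # the "high-energy" edges
```
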
In this paper we provide new randomized algorithms with improved runtimes for solving linear programs with two-sided constraints. In the special case of the minimum cost flow problem on $n$-vertex $m$-edge graphs with integer polynomially-bounded costs and capacities we obtain a randomized method which solves the problem in $\tilde{O}(m+n^{1.5})$ time. This improves upon the previous best runtime of $\tilde{O}(m\sqrt{n})$ (Lee-Sidford 2014) and, in the special case of unit-capacity maximum flow, improves upon the previous best runtimes of $m^{4/3+o(1)}$ (Liu-Sidford 2020, Kathuria 2020) and $\tilde{O}(m\sqrt{n})$ (Lee-Sidford 2014) for sufficiently dense graphs. For $\ell_1$-regression in a matrix with $n$ columns and $m$ rows we obtain a randomized method which computes an $\epsilon$-approximate solution in $\tilde{O}(mn+n^{2.5})$ time. This yields a randomized method which computes an $\epsilon$-optimal policy of a discounted Markov Decision Process with $S$ states and $A$ actions per state in time $\tilde{O}(S^2A+S^{2.5})$. These methods improve upon the previous best runtimes of methods which depend polylogarithmically on problem parameters, which were $\tilde{O}(mn^{1.5})$ (Lee-Sidford 2015) and $\tilde{O}(S^{2.5}A)$ (Lee-Sidford 2014, Sidford-Wang-Wu-Ye 2018). To obtain this result we introduce two new algorithmic tools of independent interest. First, we design a new general interior point method for solving linear programs with two-sided constraints which combines techniques from (Lee-Song-Zhang 2019, Brand et al. 2020) to obtain a robust stochastic method with iteration count nearly the square root of the smaller dimension. Second, to implement this method we provide dynamic data structures for efficiently maintaining approximations to variants of Lewis weights, a fundamental importance measure for matrices which generalizes leverage scores and effective resistances.
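
For readers unfamiliar with Lewis weights, the quantity the dynamic data structures above maintain, here is an offline, dense sketch of the standard Cohen-Peng fixed-point iteration (not the paper's dynamic algorithm); leverage scores are recovered as the special case $p = 2$.

```python
import numpy as np

def lewis_weights(A, p=1.0, iters=30):
    """Approximate l_p Lewis weights of the rows of A (the iteration converges for p < 4)."""
    n, d = A.shape
    w = np.full(n, d / n)                                   # any positive start works
    for _ in range(iters):
        G = A.T @ (w[:, None] ** (1 - 2 / p) * A)           # A^T W^{1-2/p} A
        q = np.einsum('ij,ij->i', A @ np.linalg.inv(G), A)  # a_i^T G^{-1} a_i
        w = q ** (p / 2)                                    # fixed-point map
    return w

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 10))
w = lewis_weights(A, p=1.0)
print(w.sum())     # Lewis weights sum to d (here 10), a quick sanity check
```
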
In this paper we show how to recover spectral approximations of broad classes of structured matrices using only a polylogarithmic number of adaptive linear measurements of either the matrix or its inverse. Leveraging this result we obtain faster algorithms for a variety of linear algebraic problems. Key results include:
$\bullet$ A nearly linear time algorithm for solving the inverse of symmetric $M$-matrices, a strict superset of Laplacians and SDD matrices.
$\bullet$ An $\tilde{O}(n^2)$ time algorithm for solving $n \times n$ linear systems that are constant spectral approximations of Laplacians or, more generally, SDD matrices.
$\bullet$ An $\tilde{O}(n^2)$ algorithm to recover a spectral approximation of an $n$-vertex graph using only $\tilde{O}(1)$ matrix-vector multiplies with its Laplacian matrix.
The previous best results for each problem either used a trivial number of queries to exactly recover the matrix or a trivial $O(n^{\omega})$ running time, where $\omega$ is the matrix multiplication constant. We achieve these results by generalizing recent semidefinite-programming-based linear-sized sparsifier results of Lee and Sun (2017) and providing iterative methods inspired by the semistreaming sparsification results of Kapralov, Lee, Musco, Musco and Sidford (2014) and the input sparsity time linear system solving results of Li, Miller, and Peng (2013). We hope that by initiating the study of these natural problems, expanding the robustness and scope of recent nearly linear time linear system solving research, and providing general matrix recovery machinery, this work may serve as a stepping stone for faster algorithms.
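
As a point of comparison for the linear-measurement model above, here is the classical Spielman-Srivastava style estimator (not this paper's method): with a small number $k$ of random $\pm 1$ measurement vectors ($O(\log n / \epsilon^2)$ suffices by the Johnson-Lindenstrauss lemma), one can approximate every effective resistance of a graph, which is the standard route to a spectral approximation by importance sampling of edges.

```python
import numpy as np

def effective_resistances(edges, n, k=200, seed=0):
    rng = np.random.default_rng(seed)
    m = len(edges)
    B = np.zeros((m, n))                          # signed edge-vertex incidence
    for e, (u, v) in enumerate(edges):
        B[e, u], B[e, v] = 1.0, -1.0
    L = B.T @ B                                   # unweighted graph Laplacian
    Q = rng.choice([-1.0, 1.0], size=(k, m)) / np.sqrt(k)   # k random +-1 measurements
    Z = (Q @ B) @ np.linalg.pinv(L)               # k x n sketch of B L^+
    # R_eff(u, v) = ||B L^+ (e_u - e_v)||^2 exactly, so ||Z (e_u - e_v)||^2
    # approximates it by the Johnson-Lindenstrauss lemma
    return np.array([np.sum((Z[:, u] - Z[:, v]) ** 2) for u, v in edges])

edges = [(i, j) for i in range(8) for j in range(i + 1, 8)]   # complete graph K_8
print(np.round(effective_resistances(edges, n=8), 3))         # all roughly 2/8
```
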
We give an approximation algorithm for packing and covering linear programs (linear programs with non-negative coefficients). Given a constraint matrix with $n$ non-zeros, $r$ rows, and $c$ columns, the algorithm computes feasible primal and dual solutions whose costs are within a factor of $1+\epsilon$ of the optimal cost in time $O((r+c)\log(n)/\epsilon^2 + n)$.
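
This abstract only states the guarantee, so for concreteness here is the kind of primal/dual pair it refers to, solved exactly with scipy (this is not the paper's $O((r+c)\log(n)/\epsilon^2 + n)$ algorithm): the packing LP below and its covering dual have equal optimal costs, and the approximation algorithm returns feasible solutions to both whose costs differ by at most a $1+\epsilon$ factor.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 0.0, 3.0]])        # non-negative constraint matrix
b = np.array([4.0, 6.0])               # packing capacities
c = np.array([1.0, 1.0, 1.0])          # packing objective

# Packing primal:  max c^T x  s.t.  A x <= b, x >= 0   (linprog minimizes, so negate c)
primal = linprog(-c, A_ub=A, b_ub=b, bounds=(0, None))
# Covering dual:   min b^T y  s.t.  A^T y >= c, y >= 0
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=(0, None))

print(-primal.fun, dual.fun)   # equal at optimality; an approximate solver
                               # returns a pair within a 1+eps factor of each other
```
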
We show how to solve directed Laplacian systems in nearly-linear time. Given a linear system in an $n \times n$ Eulerian directed Laplacian with $m$ nonzero entries, we show how to compute an $\epsilon$-approximate solution in time $O(m \log^{O(1)}(n) \log(1/\epsilon))$. Through reductions from [Cohen et al. FOCS16], this gives the first nearly-linear time algorithms for computing $\epsilon$-approximate solutions to row or column diagonally dominant linear systems (including arbitrary directed Laplacians) and computing $\epsilon$-approximations to various properties of random walks on directed graphs, including stationary distributions, personalized PageRank vectors, hitting times, and escape probabilities. These bounds improve upon the recent almost-linear algorithms of [Cohen et al. STOC17], which gave an algorithm to solve Eulerian Laplacian systems in time $O((m+n2^{O(\sqrt{\log n \log\log n})})\log^{O(1)}(n \epsilon^{-1}))$. To achieve our results, we provide a structural result that we believe is of independent interest. We show that Laplacians of all strongly connected directed graphs have sparse approximate LU-factorizations. That is, for every such directed Laplacian $\mathbf{L}$, there is a lower triangular matrix $\boldsymbol{\mathfrak{L}}$ and an upper triangular matrix $\boldsymbol{\mathfrak{U}}$, each with at most $\tilde{O}(n)$ nonzero entries, such that their product $\boldsymbol{\mathfrak{L}} \boldsymbol{\mathfrak{U}}$ spectrally approximates $\mathbf{L}$ in an appropriate norm. This claim can be viewed as an analogue of recent work on sparse Cholesky factorizations of Laplacians of undirected graphs. We show how to construct such factorizations in nearly-linear time and prove that, once constructed, they yield nearly-linear time algorithms for solving directed Laplacian systems.
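
The following toy (dense numpy least squares, not the nearly-linear-time solver) just pins down the objects in the abstract: the directed Laplacian $\mathbf{L} = \mathbf{D}_{\mathrm{out}} - \mathbf{A}^{\top}$ of a weighted digraph, the Eulerian condition that $\mathbf{L}$ annihilates the all-ones vector on both sides, and a solution of $\mathbf{L} x = b$ for a demand $b$ orthogonal to the all-ones vector.

```python
import numpy as np

# weighted adjacency: A[u, v] = weight of edge u -> v, with weights chosen so
# that every vertex has in-degree equal to out-degree (the Eulerian condition)
A = np.array([[0., 2., 0., 1.],
              [0., 0., 2., 0.],
              [1., 0., 0., 1.],
              [2., 0., 0., 0.]])
D_out = np.diag(A.sum(axis=1))
L = D_out - A.T                     # directed Laplacian (its columns sum to zero)

ones = np.ones(4)
print(L @ ones, ones @ L)           # both ~0: the digraph is Eulerian

b = np.array([1., -1., 1., -1.])    # any demand with 1^T b = 0 is solvable
x, *_ = np.linalg.lstsq(L, b, rcond=None)
print(np.linalg.norm(L @ x - b))    # ~0: a solution of L x = b
```
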