
We give lower bounds on the performance of two of the most popular sampling methods in practice, the Metropolis-adjusted Langevin algorithm (MALA) and multi-step Hamiltonian Monte Carlo (HMC) with a leapfrog integrator, when applied to well-conditioned distributions. Our main result is a nearly-tight lower bound of $\widetilde{\Omega}(\kappa d)$ on the mixing time of MALA from an exponentially warm start, matching a line of algorithmic results up to logarithmic factors and answering an open question of Chewi et al. We also show that a polynomial dependence on dimension is necessary for the relaxation time of HMC under any number of leapfrog steps, and bound the gains achievable by changing the step count. Our HMC analysis draws upon a novel connection between leapfrog integration and Chebyshev polynomials, which may be of independent interest.
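For readers unfamiliar with the sampler being lower-bounded, the following is a minimal sketch of one MALA iteration for a target density proportional to exp(-f): a Langevin proposal followed by a Metropolis-Hastings correction. The function names, the Gaussian test target with condition number kappa, and the step-size choice are illustrative only and are not taken from the paper.

```python
import numpy as np

def mala_step(x, f, grad_f, h, rng):
    """One Metropolis-adjusted Langevin step targeting the density exp(-f).

    Proposal: y = x - h * grad_f(x) + sqrt(2h) * N(0, I), accepted with the
    usual Metropolis-Hastings correction for the Gaussian proposal kernel.
    """
    d = x.shape[0]
    y = x - h * grad_f(x) + np.sqrt(2 * h) * rng.standard_normal(d)

    def log_q(src, dst):                     # log density of proposing dst from src
        mean = src - h * grad_f(src)
        return -np.sum((dst - mean) ** 2) / (4 * h)

    log_accept = (f(x) - f(y)) + (log_q(y, x) - log_q(x, y))
    if np.log(rng.uniform()) < log_accept:
        return y, True
    return x, False

if __name__ == "__main__":
    # Illustrative target: a Gaussian with Hessian eigenvalues in [1, kappa].
    d, kappa = 50, 10.0
    diag = np.linspace(1.0, kappa, d)
    f = lambda z: 0.5 * np.sum(diag * z ** 2)
    grad_f = lambda z: diag * z

    rng = np.random.default_rng(0)
    x = rng.standard_normal(d)               # a crude warm start
    h = 0.5 / (kappa * np.sqrt(d))           # heuristic step size, illustrative
    accepts = 0
    for _ in range(2000):
        x, accepted = mala_step(x, f, grad_f, h, rng)
        accepts += accepted
    print("acceptance rate:", accepts / 2000)
```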
We give a fast algorithm to optimally compose privacy guarantees of differentially private (DP) algorithms to arbitrary accuracy. Our method is based on the notion of privacy loss random variables to quantify the privacy loss of DP algorithms. The running time and memory needed for our algorithm to approximate the privacy curve of a DP algorithm composed with itself $k$ times is $\tilde{O}(\sqrt{k})$. This improves over the best prior method by Koskela et al. (2020), which requires $\tilde{\Omega}(k^{1.5})$ running time. We demonstrate the utility of our algorithm by accurately computing the privacy loss of the DP-SGD algorithm of Abadi et al. (2016) and showing that our algorithm speeds up the privacy computations by a few orders of magnitude compared to prior work, while maintaining similar accuracy.
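As background for the abstract above, here is a small, self-contained sketch of the privacy-loss-distribution viewpoint: discretize the privacy loss random variable of a single mechanism on a grid, self-compose it $k$ times by FFT convolution, and read off $\delta(\epsilon)$ from the composed distribution. This illustrates the quantities involved (in the spirit of Koskela et al.), not the paper's faster $\tilde{O}(\sqrt{k})$ algorithm, and it ignores the careful discretization-error accounting such methods require; the Gaussian mechanism, grid bounds, and constants are illustrative.

```python
import numpy as np

def gaussian_pld(sigma, grid):
    """Discretized privacy loss distribution of the Gaussian mechanism.

    For adjacent outputs N(0, sigma^2) vs N(1, sigma^2), the privacy loss
    random variable is L ~ N(1/(2 sigma^2), 1/sigma^2); its density is binned
    onto the supplied grid (no rigorous rounding, purely illustrative).
    """
    mu, var = 1.0 / (2 * sigma ** 2), 1.0 / sigma ** 2
    mass = np.exp(-(grid - mu) ** 2 / (2 * var))
    return mass / mass.sum()

def self_compose(mass, k):
    """k-fold self-composition of a discretized PLD via FFT convolution."""
    out_len = (len(mass) - 1) * k + 1
    size = 1 << (out_len - 1).bit_length()   # FFT length, next power of two
    composed = np.fft.irfft(np.fft.rfft(mass, size) ** k, size)[:out_len]
    composed = np.clip(composed, 0.0, None)
    return composed / composed.sum()

def delta_of_eps(mass, loss_values, eps):
    """delta(eps) = E[(1 - exp(eps - L))_+]; the integrand is positive only where L > eps."""
    pos = loss_values > eps
    return float(np.sum(mass[pos] * (1.0 - np.exp(eps - loss_values[pos]))))

if __name__ == "__main__":
    sigma, k, eps, step = 5.0, 100, 2.0, 1e-3
    grid = np.arange(-2.0, 2.0, step)        # truncation window, illustrative
    composed = self_compose(gaussian_pld(sigma, grid), k)
    losses = k * grid[0] + step * np.arange(len(composed))
    print(f"delta({eps}) after {k} compositions ~", delta_of_eps(composed, losses, eps))
```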
We study the differentially private Empirical Risk Minimization (ERM) and Stochastic Convex Optimization (SCO) problems for non-smooth convex functions. We obtain a (nearly) optimal bound on the excess empirical risk and excess population loss with subquadratic gradient complexity. More precisely, our differentially private algorithm requires $O(\frac{N^{3/2}}{d^{1/8}} + \frac{N^2}{d})$ gradient queries for optimal excess empirical risk, which is achieved with the help of subsampling and smoothing the function via convolution. This is the first subquadratic algorithm for the non-smooth case when $d$ is super constant. As a direct application, using the iterative localization approach of Feldman et al. \cite{fkt20}, we achieve the optimal excess population loss for the stochastic convex optimization problem, with $O(\min\{N^{5/4}d^{1/8}, \frac{N^{3/2}}{d^{1/8}}\})$ gradient queries. Our work makes progress towards resolving a question raised by Bassily et al. \cite{bfgt20}, giving the first algorithms for private ERM and SCO with subquadratic steps. We note that independently Asi et al. \cite{afkt21} gave other algorithms for private ERM and SCO with subquadratic steps.
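To illustrate the "smoothing via convolution" ingredient mentioned above, the following hedged sketch smooths a non-smooth convex objective by convolving it with the uniform distribution on a small ball and runs subgradient descent with mini-batched perturbed subgradients. It is a plain (non-private) illustration of the smoothing idea only; the $\ell_1$ objective, smoothing radius, batch size, and step sizes are illustrative, and none of the privacy machinery is included.

```python
import numpy as np

def smoothed_subgradient(subgrad, x, radius, batch, rng):
    """Stochastic gradient of the convolution-smoothed function
    f_r(x) = E_u[f(x + r u)], with u uniform in the unit ball.

    Averaging subgradients of f at randomly perturbed points estimates the
    gradient of f_r, which is smooth even when f is only Lipschitz.
    """
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(batch):
        u = rng.standard_normal(d)
        u *= rng.uniform() ** (1.0 / d) / np.linalg.norm(u)   # uniform in unit ball
        g += subgrad(x + radius * u)
    return g / batch

if __name__ == "__main__":
    # Illustrative non-smooth objective: f(x) = ||x - b||_1, subgradient sign(x - b).
    rng = np.random.default_rng(0)
    d = 20
    b = rng.standard_normal(d)
    subgrad = lambda z: np.sign(z - b)

    x = np.zeros(d)
    for t in range(500):
        x -= (0.5 / np.sqrt(t + 1)) * smoothed_subgradient(subgrad, x, 0.1, 4, rng)
    print("final l1 error:", np.sum(np.abs(x - b)))
```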
In this paper we provide new randomized algorithms with improved runtimes for solving linear programs with two-sided constraints. In the special case of the minimum cost flow problem on $n$-vertex $m$-edge graphs with integer polynomially-bounded costs and capacities, we obtain a randomized method which solves the problem in $\tilde{O}(m+n^{1.5})$ time. This improves upon the previous best runtime of $\tilde{O}(m\sqrt{n})$ (Lee-Sidford 2014) and, in the special case of unit-capacity maximum flow, improves upon the previous best runtimes of $m^{4/3+o(1)}$ (Liu-Sidford 2020, Kathuria 2020) and $\tilde{O}(m\sqrt{n})$ (Lee-Sidford 2014) for sufficiently dense graphs. For $\ell_1$-regression in a matrix with $n$ columns and $m$ rows we obtain a randomized method which computes an $\epsilon$-approximate solution in $\tilde{O}(mn+n^{2.5})$ time. This yields a randomized method which computes an $\epsilon$-optimal policy of a discounted Markov Decision Process with $S$ states and $A$ actions per state in time $\tilde{O}(S^2A+S^{2.5})$. These methods improve upon the previous best runtimes of methods which depend polylogarithmically on problem parameters, which were $\tilde{O}(mn^{1.5})$ (Lee-Sidford 2015) and $\tilde{O}(S^{2.5}A)$ (Lee-Sidford 2014, Sidford-Wang-Wu-Ye 2018). To obtain this result we introduce two new algorithmic tools of independent interest. First, we design a new general interior point method for solving linear programs with two-sided constraints which combines techniques from (Lee-Song-Zhang 2019, Brand et al. 2020) to obtain a robust stochastic method with iteration count nearly the square root of the smaller dimension. Second, to implement this method we provide dynamic data structures for efficiently maintaining approximations to variants of Lewis weights, a fundamental importance measure for matrices which generalizes leverage scores and effective resistances.
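Since Lewis weights are central to the data structures described above, here is a small sketch of how they generalize leverage scores, using the standard fixed-point iteration (in the style of Cohen and Peng) rather than the paper's dynamic maintenance; the matrix sizes, iteration count, and starting point are illustrative. Both quantities sum to the number of columns, which the demo checks.

```python
import numpy as np

def leverage_scores(A):
    """Leverage score of row i: a_i^T (A^T A)^{-1} a_i."""
    return np.sum((A @ np.linalg.inv(A.T @ A)) * A, axis=1)

def lewis_weights(A, p, iters=100):
    """Approximate l_p Lewis weights by the fixed-point iteration
        w_i <- ( a_i^T (A^T W^{1-2/p} A)^{-1} a_i )^{p/2},
    which converges for p < 4 (Cohen-Peng style); p = 2 recovers leverage scores.
    """
    m, d = A.shape
    w = np.full(m, d / m)                       # any positive start works
    for _ in range(iters):
        W = w ** (1.0 - 2.0 / p)
        G = A.T @ (W[:, None] * A)
        scores = np.sum((A @ np.linalg.inv(G)) * A, axis=1)
        w = scores ** (p / 2.0)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 10))
    print("leverage scores sum to d:", round(leverage_scores(A).sum(), 6))
    print("l_1 Lewis weights sum to d:", round(lewis_weights(A, p=1).sum(), 6))
```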
Semidefinite programs (SDPs) are a fundamental class of optimization problems with important recent applications in approximation algorithms, quantum complexity, robust learning, algorithmic rounding, and adversarial deep learning. This paper presents a faster interior point method to solve generic SDPs with variable size $n \times n$ and $m$ constraints in time $\widetilde{O}(\sqrt{n}(mn^2 + m^\omega + n^\omega)\log(1/\epsilon))$, where $\omega$ is the exponent of matrix multiplication and $\epsilon$ is the relative accuracy. In the predominant case of $m \geq n$, our runtime outperforms that of the previous fastest SDP solver, which is based on the cutting plane method of Jiang, Lee, Song, and Wong [JLSW20]. Our algorithm's runtime can be naturally interpreted as follows: $\widetilde{O}(\sqrt{n}\log(1/\epsilon))$ is the number of iterations needed for our interior point method, $mn^2$ is the input size, and $m^\omega + n^\omega$ is the time to invert the Hessian and slack matrix in each iteration. These constitute natural barriers to further improving the runtime of interior point methods for solving generic SDPs.
We give the first approximation algorithm for mixed packing and covering semidefinite programs (SDPs) with polylogarithmic dependence on width. Mixed packing and covering SDPs constitute a fundamental algorithmic primitive with recent applications in combinatorial optimization, robust learning, and quantum complexity. The current approximate solvers for positive semidefinite programming can handle only pure packing instances, and technical hurdles prevent their generalization to a wider class of positive instances. For a given multiplicative accuracy of $\epsilon$, our algorithm takes $O(\log^3(nd\rho) \cdot \epsilon^{-3})$ parallelizable iterations, where $n$, $d$ are dimensions of the problem and $\rho$ is a width parameter of the instance, generalizing or improving all previous parallel algorithms in the positive linear and semidefinite programming literature. When specialized to pure packing SDPs, our algorithm's iteration complexity is $O(\log^2(nd) \cdot \epsilon^{-2})$, a slight improvement and derandomization of the state-of-the-art (Allen-Zhu et al. 16, Peng et al. 16, Wang et al. 15). For a wide variety of structured instances commonly found in applications, the iterations of our algorithm run in nearly-linear time. In doing so, we give matrix analytic techniques for overcoming obstacles that have stymied prior approaches to this open problem, as stated in past works (Peng et al. 16, Mahoney et al. 16). Crucial to our analysis are a simplification of existing algorithms for mixed positive linear programs, achieved by removing an asymmetry caused by modifying covering constraints, and a suite of matrix inequalities whose proofs are based on analyzing the Schur complements of matrices in a higher dimension. We hope that both our algorithm and techniques open the door to improved solvers for positive semidefinite programming, as well as its applications.
In this paper we provide an $\tilde{O}(nd+d^{3})$ time randomized algorithm for solving linear programs with $d$ variables and $n$ constraints with high probability. To obtain this result we provide a robust, primal-dual $\tilde{O}(\sqrt{d})$-iteration interior point method inspired by the methods of Lee and Sidford (2014, 2019) and show how to efficiently implement this method using new data structures based on heavy-hitters, the Johnson-Lindenstrauss lemma, and inverse maintenance. Interestingly, we obtain this running time without using fast matrix multiplication and consequently, barring a major advance in linear system solving, our running time is near optimal for solving dense linear programs among algorithms that do not use fast matrix multiplication.
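For context on what an interior point method iterates, here is a textbook dense primal-dual path-following sketch for a standard-form LP, solving the Newton system through normal equations at every step. It reflects none of the paper's $\tilde{O}(\sqrt{d})$-iteration robustness or heavy-hitter/inverse-maintenance data structures; the centering parameter, damping, and the toy instance are illustrative.

```python
import numpy as np

def max_step(v, dv, damp=0.9):
    """Largest step in [0, 1] keeping v + alpha*dv strictly positive (with damping)."""
    neg = dv < 0
    if not neg.any():
        return 1.0
    return min(1.0, damp * float(np.min(-v[neg] / dv[neg])))

def ipm_lp(A, b, c, iters=100, sigma=0.1, tol=1e-9):
    """Textbook infeasible primal-dual path-following sketch for
    min c^T x  s.t.  A x = b, x >= 0, using dense normal equations.
    """
    m, n = A.shape
    x, s, y = np.ones(n), np.ones(n), np.zeros(m)
    for _ in range(iters):
        mu = x @ s / n
        r_p = A @ x - b                      # primal residual
        r_d = A.T @ y + s - c                # dual residual
        if max(np.linalg.norm(r_p), np.linalg.norm(r_d), mu) < tol:
            break
        d = x / s
        rhs = -r_d + s - sigma * mu / x
        # Newton step via normal equations: (A D A^T) dy = -r_p + A (D rhs)
        dy = np.linalg.solve(A @ (d[:, None] * A.T), -r_p + A @ (d * rhs))
        dx = d * (A.T @ dy - rhs)
        ds = -r_d - A.T @ dy
        x = x + max_step(x, dx) * dx
        alpha_d = max_step(s, ds)
        y, s = y + alpha_d * dy, s + alpha_d * ds
    return x

if __name__ == "__main__":
    # Tiny example: min -x1 - 2*x2 with x1 + x2 <= 4 and x2 <= 3 (slacks added).
    A = np.array([[1.0, 1.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 1.0]])
    b = np.array([4.0, 3.0])
    c = np.array([-1.0, -2.0, 0.0, 0.0])
    print("solution:", np.round(ipm_lp(A, b, c), 4))   # expect x1 = 1, x2 = 3
```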
The central limit theorem for convex bodies says that with high probability the marginal of an isotropic log-concave distribution along a random direction is close to a Gaussian, with the quantitative difference determined asymptotically by the Cheeger/Poincaré/KLS constant. Here we propose a generalized CLT for marginals along random directions drawn from any isotropic log-concave distribution; namely, for $x,y$ drawn independently from isotropic log-concave densities $p,q$, the random variable $\langle x,y\rangle$ is close to Gaussian. Our main result is that this generalized CLT is quantitatively equivalent (up to a small factor) to the KLS conjecture. Any polynomial improvement in the current KLS bound of $n^{1/4}$ in $\mathbb{R}^n$ implies the generalized CLT, and vice versa. This tight connection suggests that the generalized CLT might provide insight into basic open questions in asymptotic convex geometry.
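The statement of the generalized CLT is easy to probe numerically. The snippet below draws $x,y$ independently from one simple isotropic log-concave density (the uniform distribution on a suitably scaled cube, chosen for convenience) and checks that the normalized inner product behaves like a standard Gaussian; the dimension and sample counts are illustrative.

```python
import numpy as np

# x, y drawn independently from an isotropic log-concave density: here the
# uniform distribution on [-sqrt(3), sqrt(3)]^n, which has identity covariance.
rng = np.random.default_rng(0)
n, trials = 100, 50_000
half_width = np.sqrt(3.0)
x = rng.uniform(-half_width, half_width, size=(trials, n))
y = rng.uniform(-half_width, half_width, size=(trials, n))

# <x, y> has mean 0 and variance n, so the normalized inner product should
# look approximately standard normal if the generalized CLT holds.
z = np.einsum('ij,ij->i', x, y) / np.sqrt(n)
print("mean          :", z.mean())                       # ~ 0
print("std           :", z.std())                        # ~ 1
print("P(|Z| > 1.96) :", np.mean(np.abs(z) > 1.96))      # ~ 0.05 for a Gaussian
```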
Submodular function minimization (SFM) is a fundamental discrete optimization problem which generalizes many well known problems, has applications in various fields, and can be solved in polynomial time. Owing to applications in computer vision and machine learning, fast SFM algorithms are highly desirable. The current fastest algorithms [Lee, Sidford, Wong, FOCS 2015] run in $O(n^{2}\log nM\cdot\textrm{EO} + n^{3}\log^{O(1)}nM)$ time and $O(n^{3}\log^{2}n\cdot\textrm{EO} + n^{4}\log^{O(1)}n)$ time respectively, where $M$ is the largest absolute value of the function (assuming the range is integers) and $\textrm{EO}$ is the time taken to evaluate the function on any set. Although the best known lower bound on the query complexity is only $\Omega(n)$, the current shortest non-deterministic proof certifying the optimum value of a function requires $\Theta(n^{2})$ function evaluations. The main contributions of this paper are subquadratic SFM algorithms. For integer-valued submodular functions, we give an SFM algorithm which runs in $O(nM^{3}\log n\cdot\textrm{EO})$ time, giving the first nearly linear time algorithm in any known regime. For real-valued submodular functions with range in $[-1,1]$, we give an algorithm which in $\tilde{O}(n^{5/3}\cdot\textrm{EO}/\varepsilon^{2})$ time returns an $\varepsilon$-additive approximate solution. At their heart, our algorithms are projected stochastic subgradient descent methods on the Lovász extension of submodular functions, where we crucially exploit submodularity and data structures to obtain fast, i.e. sublinear time, subgradient updates. The latter is crucial for beating the $n^{2}$ bound, as we show that algorithms which access only subgradients of the Lovász extension, and these include the theoretically best algorithms mentioned above, must make $\Omega(n)$ subgradient calls (even for functions whose range is $\{-1,0,1\}$).
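To make the algorithmic template above concrete, here is a minimal sketch of the basic oracle it relies on: the greedy subgradient of the Lovász extension, plugged into projected subgradient descent with level-set rounding. The cut-plus-modular test function, step sizes, and iteration count are illustrative, and the sketch omits the stochastic, sublinear-time subgradient updates that are the paper's actual contribution.

```python
import numpy as np

def lovasz_subgradient(F, x):
    """Subgradient of the Lovasz extension of a submodular F at x.

    Greedy rule: sort coordinates of x in decreasing order; the subgradient
    entry for the i-th element in that order is the marginal gain
    F(S_i) - F(S_{i-1}) of adding it to the prefix set.
    """
    n = len(x)
    g = np.zeros(n)
    S, prev = set(), F(set())
    for i in np.argsort(-x):
        S.add(int(i))
        cur = F(S)
        g[i] = cur - prev
        prev = cur
    return g

if __name__ == "__main__":
    # Illustrative submodular F: cut function of a 4-cycle plus a modular term.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    w = [-3.0, 5.0, -3.0, 5.0]
    F = lambda S: sum((u in S) != (v in S) for u, v in edges) + sum(w[i] for i in S)

    n = 4
    x = np.full(n, 0.5)
    best_set, best_val = set(), F(set())
    for t in range(200):
        # Some level set of x is at least as good as the extension value at x,
        # so track the best level set seen as the rounded solution.
        for theta in x:
            S = {i for i in range(n) if x[i] >= theta}
            if F(S) < best_val:
                best_set, best_val = S, F(S)
        x = np.clip(x - (0.5 / np.sqrt(t + 1)) * lovasz_subgradient(F, x), 0.0, 1.0)
    print("best set found:", best_set, "value:", best_val)   # expect {0, 2}, -2
```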
We present the first single pass algorithm for computing spectral sparsifiers of graphs in the dynamic semi-streaming model. Given a single pass over a stream containing insertions and deletions of edges to a graph G, our algorithm maintains a randomized linear sketch of the incidence matrix of G into dimension O((1/epsilon^2) n polylog(n)). Using this sketch, at any point, the algorithm can output a (1 +/- epsilon) spectral sparsifier for G with high probability. While O((1/epsilon^2) n polylog(n)) space algorithms are known for computing cut sparsifiers in dynamic streams [AGM12b, GKP12] and spectral sparsifiers in insertion-only streams [KL11], prior to our work, the best known single pass algorithm for maintaining spectral sparsifiers in dynamic streams required sketches of dimension Omega((1/epsilon^2) n^(5/3)) [AGM14]. To achieve our result, we show that, using a coarse sparsifier of G and a linear sketch of G's incidence matrix, it is possible to sample edges by effective resistance, obtaining a spectral sparsifier of arbitrary precision. Sampling from the sketch requires a novel application of ell_2/ell_2 sparse recovery, a natural extension of the ell_0 methods used for cut sparsifiers in [AGM12b]. Recent work of [MP12] on row sampling for matrix approximation gives a recursive approach for obtaining the required coarse sparsifiers. Under certain restrictions, our approach also extends to the problem of maintaining a spectral approximation for a general matrix A^T A given a stream of updates to rows in A.
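As background for the streaming result above, this is an offline sketch of the sampling primitive it emulates: compute effective resistances from the Laplacian pseudoinverse and keep each edge with probability proportional to its resistance, reweighting survivors (in the style of Spielman and Srivastava). The oversampling constant and the complete-graph test case are illustrative, and nothing here uses linear sketches or sparse recovery.

```python
import numpy as np

def sparsify_by_effective_resistance(n, edges, eps, rng):
    """Offline effective-resistance sampling for spectral sparsification.

    Keeps each edge with probability proportional to its effective resistance
    (oversampled by ~log n / eps^2) and reweights kept edges by 1/p, so the
    sparsifier's Laplacian is an unbiased, spectrally close estimate.
    """
    m = len(edges)
    B = np.zeros((m, n))                         # signed edge-vertex incidence matrix
    for idx, (u, v) in enumerate(edges):
        B[idx, u], B[idx, v] = 1.0, -1.0
    L_pinv = np.linalg.pinv(B.T @ B)             # Laplacian pseudoinverse
    R = np.sum((B @ L_pinv) * B, axis=1)         # effective resistance of each edge

    C = 4.0                                      # illustrative oversampling constant
    p = np.minimum(1.0, C * np.log(n) * R / eps ** 2)
    keep = rng.uniform(size=m) < p
    return [(u, v, 1.0 / pe) for (u, v), pe, k in zip(edges, p, keep) if k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 200
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]   # complete graph
    H = sparsify_by_effective_resistance(n, edges, eps=1.0, rng=rng)
    print(f"kept {len(H)} of {len(edges)} edges")
```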