
New Computational Guarantees for Solving Convex Optimization Problems with First Order Methods, via a Function Growth Condition Measure

Added by Robert Freund
Publication date: 2015
Language: English





Motivated by recent work of Renegar, we present new computational methods and associated computational guarantees for solving convex optimization problems using first-order methods. Our problem of interest is the general convex optimization problem $f^* = \min_{x \in Q} f(x)$, where we presume knowledge of a strict lower bound $f_{\mathrm{slb}} < f^*$. [Indeed, $f_{\mathrm{slb}}$ is naturally known when optimizing many loss functions in statistics and machine learning (least-squares, logistic loss, exponential loss, total variation loss, etc.), as well as in Renegar's transformed version of the standard conic optimization problem; in all these cases one has $f_{\mathrm{slb}} = 0 < f^*$.] We introduce a new functional measure called the growth constant $G$ for $f(\cdot)$, which measures how quickly the level sets of $f(\cdot)$ grow relative to the function value, and which plays a fundamental role in the complexity analysis. When $f(\cdot)$ is non-smooth, we present new computational guarantees for the Subgradient Descent Method and for smoothing methods that can improve existing computational guarantees in several ways, most notably when the initial iterate $x^0$ is far from the optimal solution set. When $f(\cdot)$ is smooth, we present a scheme for periodically restarting the Accelerated Gradient Method that can also improve existing computational guarantees when $x^0$ is far from the optimal solution set, and in the presence of added structure we present a scheme using parametrically increased smoothing that further improves the associated computational guarantees.
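To make the restart idea concrete, the sketch below (in Python) shows a minimal accelerated gradient method that exploits a strict lower bound $f_{\mathrm{slb}}$: the momentum sequence is reset whenever the gap surrogate $f(x) - f_{\mathrm{slb}}$ has been halved since the last restart. This is an illustrative reading of the idea, not the paper's exact algorithm; the step size $1/L$, the momentum weight $k/(k+3)$, and the halving criterion are standard or assumed choices.

import numpy as np

def accelerated_gradient_restart(f, grad_f, x0, L, f_slb,
                                 tol=1e-6, max_iters=10_000):
    # Nesterov-style accelerated gradient with periodic restarts driven by the
    # strict lower bound f_slb (illustrative sketch, not the paper's method).
    x = x0.copy()
    y = x0.copy()
    k = 0                       # iterations since the last restart
    gap = f(x) - f_slb          # optimality-gap surrogate at the last restart
    for _ in range(max_iters):
        x_next = y - grad_f(y) / L          # gradient step with step size 1/L
        k += 1
        beta = k / (k + 3.0)                # standard momentum weight
        y = x_next + beta * (x_next - x)
        x = x_next
        current_gap = f(x) - f_slb
        if current_gap <= tol:
            break
        if current_gap <= 0.5 * gap:        # gap halved: restart the momentum
            y, k, gap = x.copy(), 0, current_gap
    return x

# Example (least squares, where f_slb = 0 is a valid strict lower bound
# whenever A x = b has no exact solution):
#   A, b   = np.random.randn(50, 20), np.random.randn(50)
#   f      = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
#   grad_f = lambda x: A.T @ (A @ x - b)
#   x_hat  = accelerated_gradient_restart(f, grad_f, np.zeros(20),
#                                         L=np.linalg.norm(A, 2) ** 2, f_slb=0.0)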




Related research

We propose first-order methods based on a level-set technique for convex constrained optimization problems that satisfy an error bound condition with unknown growth parameters. The proposed approach solves the original problem by solving a sequence of unconstrained subproblems defined with different level parameters. Unlike existing level-set methods, in which the subproblems are solved sequentially, our method applies a first-order method to solve each subproblem independently and simultaneously, which can be implemented with either a single processor or multiple processors. Once the objective value of one subproblem is reduced by a constant factor, a sequential restart is performed to update the level parameters and restart the first-order methods. When the problem is non-smooth, our method finds an $\epsilon$-optimal and $\epsilon$-feasible solution by computing at most $O(\frac{G^{2/d}}{\epsilon^{2-2/d}}\ln^3(\frac{1}{\epsilon}))$ subgradients, where $G>0$ and $d \geq 1$ are the growth rate and the exponent, respectively, in the error bound condition. When the problem is smooth, the complexity is improved to $O(\frac{G^{1/d}}{\epsilon^{1-1/d}}\ln^3(\frac{1}{\epsilon}))$. Our methods do not require knowledge of $G$, $d$, or any other problem-dependent parameters.
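For context, here is a minimal Python sketch of the underlying level-set construction: for the constrained problem $\min f(x)$ s.t. $g(x) \leq 0$, each level parameter $r$ defines the subproblem $\min_x \max(f(x)-r, g(x))$, whose optimal value is nonincreasing in $r$ and vanishes at $r = f^*$. The sketch solves each subproblem with a plain subgradient method and updates the level by bisection; the parallel, restart-based scheme described above, which avoids knowledge of $G$ and $d$, is not reproduced here.

import numpy as np

def subgrad_min(P, subgrad_P, x0, iters=300):
    # Plain subgradient method with a diminishing, normalized step (illustrative).
    x, best_x, best_val = x0.copy(), x0.copy(), P(x0)
    for k in range(1, iters + 1):
        gk = subgrad_P(x)
        x = x - gk / (np.linalg.norm(gk) * np.sqrt(k) + 1e-12)
        val = P(x)
        if val < best_val:
            best_x, best_val = x.copy(), val
    return best_x, best_val

def level_set_bisection(f, g, sg_f, sg_g, x0, r_lo, r_hi, tol=1e-3, rounds=30):
    # Classic level-set idea for  min f(x)  s.t.  g(x) <= 0, assuming the
    # initial bracket satisfies r_lo < f* <= r_hi.
    best_x = x0
    for _ in range(rounds):
        r = 0.5 * (r_lo + r_hi)
        P = lambda x, r=r: max(f(x) - r, g(x))
        sgP = lambda x, r=r: sg_f(x) if f(x) - r >= g(x) else sg_g(x)
        x, val = subgrad_min(P, sgP, x0)
        if val <= tol:          # subproblem value ~<= 0  =>  r is (nearly) >= f*
            r_hi, best_x = r, x
        else:                   # subproblem value > 0    =>  r < f*
            r_lo = r
    return best_x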
Long Chen, Hao Luo (2021)
We present a unified convergence analysis for first-order convex optimization methods using the concept of strong Lyapunov conditions. Combining this with suitable time-scaling factors, we are able to handle both convex and strongly convex cases, and we establish contraction properties of Lyapunov functions for many existing ordinary differential equation models. We then derive prevailing first-order optimization algorithms, such as proximal gradient methods, heavy-ball methods (also known as momentum methods), Nesterov accelerated gradient methods, and accelerated proximal gradient methods, from numerical discretizations of the corresponding dynamical systems. We also apply strong Lyapunov conditions at the discrete level and provide a more systematic analysis framework. Another contribution is a novel second-order dynamical system, called the Hessian-driven Nesterov accelerated gradient flow, which can be used to design and analyze accelerated first-order methods for smooth and non-smooth convex optimization.
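As a small illustration of the "algorithms from discretized dynamical systems" viewpoint, the Python sketch below applies a semi-implicit Euler scheme to the well-known Nesterov ODE $\ddot{x} + \frac{3}{t}\dot{x} + \nabla f(x) = 0$ of Su, Boyd and Candès. This is a standard textbook example, not the paper's Hessian-driven Nesterov accelerated gradient flow, and the step size $h = 1/\sqrt{L}$ is an assumed choice.

import numpy as np

def nesterov_ode_discretization(grad_f, x0, L, steps=500):
    # Semi-implicit Euler discretization of  x'' + (3/t) x' + grad f(x) = 0.
    # With h = 1/sqrt(L), the update x_{k+1} - x_k contains the familiar
    # -(1/L) grad f term of accelerated gradient methods.
    h = 1.0 / np.sqrt(L)
    x = x0.copy()
    v = np.zeros_like(x0)
    for k in range(1, steps + 1):
        t = k * h
        v = v - h * ((3.0 / t) * v + grad_f(x + h * v))   # velocity update
        x = x + h * v                                     # position update
    return x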
Zizhuo Wang (2012)
In this paper, we propose two algorithms for solving convex optimization problems with linear ascending constraints. When the objective function is separable, we propose a dual method which terminates in a finite number of iterations. In particular, the worst case complexity of our dual method improves over the best-known result for this problem in Padakandla and Sundaresan [SIAM J. Optimization, 20 (2009), pp. 1185-1204]. We then propose a gradient projection method to solve a more general class of problems in which the objective function is not necessarily separable. Numerical experiments show that both our algorithms work well in test problems.
We study the robustness of accelerated first-order algorithms to stochastic uncertainties in gradient evaluation. Specifically, for unconstrained, smooth, strongly convex optimization problems, we examine the mean-squared error in the optimization variable when the iterates are perturbed by additive white noise. This type of uncertainty may arise in situations where an approximation of the gradient is sought through measurements of a real system or in a distributed computation over a network. Even though the underlying dynamics of first-order algorithms for this class of problems are nonlinear, we establish upper bounds on the mean-squared deviation from the optimal solution that are tight up to constant factors. Our analysis quantifies fundamental trade-offs between noise amplification and convergence rates obtained via any acceleration scheme similar to Nesterov's or heavy-ball methods. To gain additional analytical insight, for strongly convex quadratic problems, we explicitly evaluate the steady-state variance of the optimization variable in terms of the eigenvalues of the Hessian of the objective function. We demonstrate that the entire spectrum of the Hessian, rather than just the extreme eigenvalues, influences the robustness of noisy algorithms. We specialize this result to the problem of distributed averaging over undirected networks and examine the role of network size and topology on the robustness of noisy accelerated algorithms.
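The following Monte-Carlo sketch (Python) mimics the quadratic setting described above: the gradient of $f(x) = \frac{1}{2} x^\top Q x$ is corrupted by additive white noise, and the steady-state mean-squared error of a Nesterov-style accelerated method is estimated so it can be compared against plain gradient descent. The step-size and momentum values below are standard textbook choices for strongly convex problems, not the tuned parameters analyzed in the paper.

import numpy as np

def noisy_quadratic_mse(Q, sigma=0.1, iters=5000, trials=20, accelerate=True):
    # Estimate the steady-state E||x_k - x*||^2 for f(x) = 0.5 x^T Q x when the
    # gradient is perturbed by additive white noise (x* = 0 for this f).
    eigs = np.linalg.eigvalsh(Q)
    m, L = eigs[0], eigs[-1]
    if accelerate:
        alpha = 1.0 / L
        beta = (np.sqrt(L) - np.sqrt(m)) / (np.sqrt(L) + np.sqrt(m))
    else:                                   # plain gradient descent
        alpha, beta = 2.0 / (L + m), 0.0
    rng = np.random.default_rng(0)
    errs = []
    for _ in range(trials):
        x = x_prev = rng.standard_normal(Q.shape[0])
        for _ in range(iters):
            y = x + beta * (x - x_prev)                             # momentum extrapolation
            g = Q @ y + sigma * rng.standard_normal(Q.shape[0])     # noisy gradient
            x_prev, x = x, y - alpha * g
        errs.append(float(np.sum(x ** 2)))
    return np.mean(errs)

# Example: compare noise amplification of the accelerated and plain methods
#   Q = np.diag(np.linspace(1.0, 100.0, 20))
#   noisy_quadratic_mse(Q, accelerate=True), noisy_quadratic_mse(Q, accelerate=False)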
Jikai Jin (2020)
In recent years, the success of deep learning has inspired many researchers to study the optimization of general smooth non-convex functions. However, recent works have established pessimistic worst-case complexities for this class of functions, which is in stark contrast with their superior performance in real-world applications (e.g., training deep neural networks). On the other hand, many popular non-convex optimization problems have been found to enjoy certain structured properties that bear some similarity to convexity. In this paper, we study the class of quasar-convex functions to close the gap between theory and practice. We study the convergence of first-order methods in a variety of settings and under different optimality criteria. We prove complexity upper bounds that are similar to standard results established for convex functions and much better than the state-of-the-art convergence rates for general non-convex functions. Overall, this paper suggests that quasar-convexity allows efficient optimization procedures, and we look forward to seeing more problems that demonstrate similar properties in practice.
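For reference, a commonly used definition (which may differ in notation from the paper): a differentiable function $f$ with minimizer $x^*$ is $\gamma$-quasar-convex for some $\gamma \in (0,1]$ if $f(x^*) \geq f(x) + \frac{1}{\gamma} \nabla f(x)^\top (x^* - x)$ for all $x$; taking $\gamma = 1$ recovers the usual first-order characterization of convexity restricted to the minimizer.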