
A Proximal-Point Algorithm with Variable Sample-sizes (PPAWSS) for Monotone Stochastic Variational Inequality Problems

Published by: Afrooz Jalilzadeh
Publication date: 2019
Language: English





We consider a stochastic variational inequality (SVI) problem with a continuous and monotone mapping over a closed and convex set. In strongly monotone regimes, we present a variable sample-size averaging scheme (VS-Ave) that achieves a linear rate with an optimal oracle complexity. In addition, the iteration complexity is shown to display a muted dependence on the condition number compared with standard variance-reduced projection schemes. To contend with merely monotone maps, we develop amongst the first proximal-point algorithms with variable sample-sizes (PPAWSS), where increasingly accurate solutions of strongly monotone SVIs are obtained via (VS-Ave) at every step. This allows for achieving a sublinear convergence rate that matches that obtained for deterministic monotone VIs. Preliminary numerical evidence suggests that the schemes compare well with competing schemes.
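The following is a minimal, illustrative Python sketch of the two-level structure described in the abstract: an outer proximal-point loop (ppawss) whose strongly monotone regularized subproblems are solved inexactly by an averaging scheme with geometrically growing sample sizes (vs_ave). The toy map, feasible set, step size, batch-growth rule, and inner-iteration schedule are assumptions made for illustration only and do not reproduce the paper's exact parameter choices or rate guarantees.

    import numpy as np

    # Toy monotone map F(x) = A x + b (symmetric part of A is positive definite),
    # observed through a noisy oracle; the feasible set X is the unit box.
    rng = np.random.default_rng(0)
    n = 5
    S = rng.standard_normal((n, n)) / n
    A = S - S.T + 0.1 * np.eye(n)              # monotone toy map
    b = rng.standard_normal(n)

    def sampled_map(x, batch):
        """Average of `batch` noisy evaluations of F(x) = A x + b."""
        noise = rng.standard_normal((batch, n)).mean(axis=0)
        return A @ x + b + noise

    def project(x):
        """Euclidean projection onto X = [-1, 1]^n."""
        return np.clip(x, -1.0, 1.0)

    def vs_ave(center, mu, inner_iters, batch0, growth, step):
        """Illustrative variable sample-size averaging loop for the strongly
        monotone regularized map F(.) + mu*(. - center)."""
        y, y_avg = center.copy(), np.zeros_like(center)
        for t in range(inner_iters):
            batch = int(batch0 * growth ** t) + 1   # geometrically growing batch size
            g = sampled_map(y, batch) + mu * (y - center)
            y = project(y - step * g)
            y_avg = (t * y_avg + y) / (t + 1)       # running average of the iterates
        return y_avg

    def ppawss(x0, outer_iters=20, mu=1.0):
        """Outer proximal-point loop: each regularized (strongly monotone)
        subproblem is solved increasingly accurately by vs_ave."""
        x = x0.copy()
        for k in range(outer_iters):
            x = vs_ave(center=x, mu=mu, inner_iters=10 + 2 * k,
                       batch0=2, growth=1.1, step=0.1)
        return x

    x_hat = ppawss(np.zeros(n))
    print("natural residual:", np.linalg.norm(x_hat - project(x_hat - (A @ x_hat + b))))

Raising the inner accuracy at later outer steps (more inner iterations and larger batches) mirrors the idea of obtaining increasingly accurate solutions of the regularized SVIs.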




Read also

Jinlong Lei, Peng Yi, Jie Chen (2021)
The paper considers distributed stochastic optimization over randomly switching networks, where agents collaboratively minimize the average of all agents' local expectation-valued convex cost functions. Due to the stochasticity in gradient observations, the distributedness of local functions, and the randomness of communication topologies, distributed algorithms with a convergence guarantee under fixed step-sizes have not been achieved yet. This work incorporates a variance reduction scheme into the distributed stochastic gradient tracking algorithm, where local gradients are estimated by averaging across a variable number of sampled gradients. With an identically and independently distributed (i.i.d.) random network, we show that all agents' iterates converge almost surely to the same optimal solution under fixed step-sizes. When the global cost function is strongly convex and the sample size increases at a geometric rate, we prove that the iterates converge geometrically to the unique optimal solution, and establish the iteration, oracle, and communication complexity. The algorithm's performance, including the rate and complexity analysis, is further investigated with constant step-sizes and a polynomially increasing sample size. Finally, the empirical performance of the algorithm is illustrated with numerical examples.
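As a rough illustration of the variance-reduced gradient-tracking idea, the Python sketch below runs gradient tracking with a fixed step size and a geometrically increasing sample size on a toy scalar problem. The fixed complete-graph mixing matrix, quadratic local costs, and batch schedule are simplifying assumptions; the paper itself treats randomly switching networks and gives the precise complexity statements.

    import numpy as np

    # Toy distributed setup: m agents, agent i minimizes E[(x - theta_i + w)^2 / 2];
    # W is a doubly stochastic mixing matrix (fixed complete graph for simplicity).
    rng = np.random.default_rng(1)
    m = 4
    theta = rng.standard_normal(m)                  # each agent's local minimizer
    W = np.full((m, m), 1.0 / m)

    def local_grad(i, x, batch):
        """Average of `batch` noisy gradient samples of agent i's cost at x."""
        return (x - theta[i]) + rng.standard_normal(batch).mean()

    x = np.zeros(m)                                 # decision variable of each agent
    g = np.array([local_grad(i, x[i], 1) for i in range(m)])
    y = g.copy()                                    # gradient-tracking variables
    step = 0.3                                      # fixed step size

    for k in range(60):
        batch = int(1.05 ** k) + 1                  # geometrically increasing sample size
        x_new = W @ x - step * y                    # consensus + tracked-gradient step
        g_new = np.array([local_grad(i, x_new[i], batch) for i in range(m)])
        y = W @ y + g_new - g                       # track the average sampled gradient
        x, g = x_new, g_new

    print("agents' estimates:", np.round(x, 3), " global minimizer:", round(theta.mean(), 3))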
Classical extragradient schemes and their stochastic counterparts represent a cornerstone for resolving monotone variational inequality problems. Yet, such schemes have a per-iteration complexity of two projections onto a convex set and require two evaluations of the map, the former of which could be relatively expensive if $X$ is a complicated set. We consider two related avenues where the per-iteration complexity is significantly reduced: (i) a stochastic projected reflected gradient method requiring a single evaluation of the map and a single projection; and (ii) a stochastic subgradient extragradient method that requires two evaluations of the map, a single projection onto $X$, and a significantly cheaper projection (onto a halfspace) computable in closed form. Under a variance-reduced framework reliant on a sample-average of the map based on an increasing batch-size, we prove almost sure (a.s.) convergence of the iterates to a random point in the solution set for both schemes. Additionally, both schemes display a non-asymptotic rate of $\mathcal{O}(1/K)$, where $K$ denotes the number of iterations; notably, both rates match those obtained in deterministic regimes. To address feasibility sets given by the intersection of a large number of convex constraints, we adapt both of the aforementioned schemes to a random projection framework. We then show that the random projection analogs of both schemes also display a.s. convergence under a weak-sharpness requirement; furthermore, without imposing the weak-sharpness requirement, both schemes are characterized by a provable rate of $\mathcal{O}(1/\sqrt{K})$ in terms of the gap function of the projection of the averaged sequence onto $X$ as well as the infeasibility of this sequence. Preliminary numerics support the theoretical findings, and the schemes outperform standard extragradient schemes in terms of per-iteration complexity.
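A minimal sketch of scheme (i), a stochastic projected reflected gradient method under a variance-reduced (increasing batch-size) oracle, is given below; the monotone toy map, unit-ball feasible set, and step-size rule are illustrative assumptions rather than the paper's setting.

    import numpy as np

    # Toy monotone map F(x) = A x + b with a noisy oracle; X is the Euclidean unit ball.
    rng = np.random.default_rng(2)
    n = 5
    S = rng.standard_normal((n, n)) / np.sqrt(n)
    A = S - S.T + 0.05 * np.eye(n)                  # monotone toy map
    b = rng.standard_normal(n)

    def sampled_map(x, batch):
        """Sample average of `batch` noisy evaluations of F at x."""
        return A @ x + b + rng.standard_normal((batch, n)).mean(axis=0)

    def project_ball(x):
        nrm = np.linalg.norm(x)
        return x if nrm <= 1.0 else x / nrm

    gamma = 0.3 / np.linalg.norm(A, 2)              # step below the usual (sqrt(2)-1)/L bound
    x_prev = np.zeros(n)
    x = np.zeros(n)
    for k in range(200):
        batch = int(1.05 ** k) + 1                  # increasing batch size (variance reduction)
        x_ref = 2.0 * x - x_prev                    # reflection: one map evaluation per iteration
        g = sampled_map(x_ref, batch)
        x_prev, x = x, project_ball(x - gamma * g)  # one projection per iteration

    print("approximate solution:", np.round(x, 3))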
In this paper, we focus on solving a class of constrained non-convex non-concave saddle point problems in a decentralized manner by a group of nodes in a network. Specifically, we assume that each node has access to a summand of a global objective function and nodes are allowed to exchange information only with their neighboring nodes. We propose a decentralized variant of the proximal point method for solving this problem. We show that when the objective function is $\rho$-weakly convex-weakly concave, the iterates converge to approximate stationarity with a rate of $\mathcal{O}(1/\sqrt{T})$, where the approximation error depends linearly on $\sqrt{\rho}$. We further show that when the objective function satisfies the Minty VI condition (which generalizes the convex-concave case), we obtain convergence to stationarity with a rate of $\mathcal{O}(1/\sqrt{T})$. To the best of our knowledge, our proposed method is the first decentralized algorithm with theoretical guarantees for solving a non-convex non-concave decentralized saddle point problem. Our numerical results for training a generative adversarial network (GAN) in a decentralized manner match our theoretical guarantees.
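The sketch below mimics the decentralized proximal-point idea on a toy bilinear (convex-concave) saddle-point problem: each node averages with its neighbors and then approximately solves its local proximally regularized saddle-point subproblem with a few gradient descent-ascent steps. The mixing matrix, regularization parameter, and inner solver are illustrative assumptions and not the authors' exact method.

    import numpy as np

    # Toy decentralized saddle-point problem: node i holds f_i(x, y) = a_i * x * y,
    # so the averaged objective is bilinear (convex-concave); W mixes neighbors' iterates.
    rng = np.random.default_rng(3)
    m = 4
    a = rng.standard_normal(m) + 1.0
    W = np.full((m, m), 1.0 / m)                    # complete-graph averaging for simplicity

    lam, inner_step, inner_iters = 0.5, 0.1, 30
    x = rng.standard_normal(m)                      # x_i held by each node
    y = rng.standard_normal(m)                      # y_i held by each node

    for k in range(40):
        x_bar, y_bar = W @ x, W @ y                 # communication / consensus step
        x_new, y_new = x_bar.copy(), y_bar.copy()
        for _ in range(inner_iters):                # approximate the local proximal subproblem
            gx = a * y_new + (x_new - x_bar) / lam  # gradient in x of f_i plus prox term
            gy = a * x_new - (y_new - y_bar) / lam  # gradient in y of f_i minus prox term
            x_new -= inner_step * gx                # descent in x
            y_new += inner_step * gy                # ascent in y
        x, y = x_new, y_new

    print("stationarity residual:", np.linalg.norm(np.concatenate([a * y, a * x])))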
In this paper, we introduce a proximal-proximal majorization-minimization (PPMM) algorithm for nonconvex tuning-free robust regression problems. The basic idea is to apply the proximal majorization-minimization algorithm to solve the nonconvex problem, with the inner subproblems solved by a sparse semismooth Newton (SSN) method based proximal point algorithm (PPA). We must emphasize that the main difficulty in the design of the algorithm lies in how to overcome the singular difficulty of the inner subproblem. Furthermore, we also prove that the PPMM algorithm converges to a d-stationary point. Due to the Kurdyka-Łojasiewicz (KL) property of the problem, we present the convergence rate of the PPMM algorithm. Numerical experiments demonstrate that our proposed algorithm outperforms existing state-of-the-art algorithms.
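For intuition, the following simplified sketch applies a proximal majorization-minimization step to a robust regression problem with the nonconvex loss rho(r) = 0.5*log(1 + r^2): each iteration majorizes the loss by a quadratic (IRLS weights), adds a proximal term, and solves the resulting weighted least-squares system in closed form. This replaces the paper's SSN-based PPA subsolver with an exact solve and only illustrates the outer scheme.

    import numpy as np

    # Toy robust regression with the nonconvex loss rho(r) = 0.5 * log(1 + r^2):
    # each step majorizes rho by a quadratic (IRLS weights), adds a proximal term,
    # and solves the resulting weighted least-squares system exactly.
    rng = np.random.default_rng(4)
    m, n = 60, 3
    A = rng.standard_normal((m, n))
    x_true = np.array([1.0, -2.0, 0.5])
    b = A @ x_true + 0.1 * rng.standard_normal(m)
    b[::10] += 10.0                                 # inject gross outliers

    x = np.zeros(n)
    t = 1.0                                         # proximal parameter
    for k in range(30):
        r = A @ x - b
        w = 1.0 / (1.0 + r ** 2)                    # IRLS weights from the majorizer of rho
        H = A.T @ (w[:, None] * A) + np.eye(n) / t
        g = A.T @ (w * b) + x / t
        x = np.linalg.solve(H, g)                   # proximal weighted least-squares step

    print("robust estimate:", np.round(x, 3), " true coefficients:", x_true)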
We introduce a class of specially structured linear programming (LP) problems, which has favorable modeling capability for important application problems in different areas such as optimal transport, discrete tomography and economics. To solve these generally large-scale LP problems efficiently, we design an implementable inexact entropic proximal point algorithm (iEPPA) combined with an easy-to-implement dual block coordinate descent method as a subsolver. Unlike existing entropy-type proximal point algorithms, our iEPPA employs a more practically checkable stopping condition for solving the associated subproblems while achieving provable convergence. Moreover, when solving the capacity constrained multi-marginal optimal transport (CMOT) problem (a special case of our LP problem), our iEPPA is able to bypass the underlying numerical instability issues that often appear in the popular entropic regularization approach, since our algorithm does not require the proximal parameter to be very small in order to obtain an accurate approximate solution. Numerous numerical experiments show that our iEPPA is highly efficient and robust for solving large-scale CMOT problems, in comparison to the (stabilized) Dykstra's algorithm and the commercial solver Gurobi. Moreover, the experiments on discrete tomography also highlight the potential modeling power of our model.
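As a toy special case, the entropic proximal-point update has a closed form when minimizing a linear objective over the probability simplex, as sketched below; the paper's iEPPA instead handles block-structured LPs with an inexact dual block coordinate descent subsolver, so this code is only meant to illustrate the entropic prox step and the fact that the proximal parameter need not be tiny.

    import numpy as np

    # Toy special case: minimize <c, x> over the probability simplex. The entropic
    # proximal step  x_{k+1} = argmin <c, x> + (1/tau) * KL(x || x_k)  has the
    # closed-form multiplicative update used below.
    c = np.array([0.3, 0.1, 0.7, 0.2])
    x = np.full_like(c, 1.0 / c.size)               # start from the uniform distribution
    tau = 1.0                                       # proximal parameter need not be tiny

    for k in range(50):
        x = x * np.exp(-tau * c)                    # entropic proximal (multiplicative) update
        x /= x.sum()                                # renormalize onto the simplex

    print("mass concentrates on argmin_j c_j:", np.round(x, 4))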
