In this paper, we propose two new solution schemes to solve stochastic strongly monotone variational inequality problems: the stochastic extra-point solution scheme and the stochastic extra-momentum solution scheme. The first one is a general scheme based on updating the iterative sequence and an auxiliary extra-point sequence. In the case of the deterministic VI model, this approach includes several state-of-the-art first-order methods as its special cases. The second scheme combines two momentum-based directions: the so-called heavy-ball direction and the optimism direction, where only one projection per iteration is required in its updating process. We show that, if the variance of the stochastic oracle is appropriately controlled, then both schemes can be made to achieve the optimal iteration complexity of $\mathcal{O}\left(\kappa\ln\left(\frac{1}{\epsilon}\right)\right)$ to reach an $\epsilon$-solution for a strongly monotone VI problem with condition number $\kappa$. We show that these methods can be readily incorporated in a zeroth-order approach to solve stochastic minimax saddle-point problems, where only noisy and biased samples of the objective can be obtained, with a total sample complexity of $\mathcal{O}\left(\frac{\kappa^2}{\epsilon}\ln\left(\frac{1}{\epsilon}\right)\right)$.
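As a deterministic illustration of the optimism direction mentioned above, the following sketch runs the optimistic gradient (OGDA) update $x_{k+1} = x_k - 2\eta F(x_k) + \eta F(x_{k-1})$, which needs only one new operator evaluation (and one projection, trivial here) per iteration. The linear operator $F(x) = Ax + b$ and the step size are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Optimistic (OGDA) iteration for a strongly monotone VI on R^n, where the
# solution is the point x* with F(x*) = 0. The operator below is a toy
# example: the symmetric part of A is 2I, so F is strongly monotone.
def ogda(F, x0, eta, iters):
    x = x0.copy()
    F_prev = F(x0)
    for _ in range(iters):
        F_curr = F(x)
        x = x - 2.0 * eta * F_curr + eta * F_prev  # optimism: reuse past gradient
        F_prev = F_curr
    return x

A = np.array([[2.0, 1.0], [-1.0, 2.0]])
b = np.array([1.0, -1.0])
F = lambda x: A @ x + b
x_star = np.linalg.solve(A, -b)          # unique solution of F(x) = 0
x = ogda(F, np.zeros(2), eta=0.1, iters=300)
```

With a step size below the usual $1/(4L)$-type threshold, the iterates contract linearly toward the solution.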
Many large-scale optimization problems can be expressed as composite optimization models. Accelerated first-order methods such as the fast iterative shrinkage-thresholding algorithm (FISTA) have proven effective for numerous large composite models. In this paper, we present a new variation of FISTA, to be called C-FISTA, which obtains global linear convergence for a broader class of composite models than many of the latest FISTA variants. We demonstrate the versatility and effectiveness of C-FISTA by showing that it outperforms current first-order solvers on both group Lasso and group logistic regression models. Furthermore, we utilize Fenchel duality to prove that C-FISTA provides global linear convergence for a large class of convex models.
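For reference, here is the standard FISTA baseline (Beck and Teboulle's scheme) that C-FISTA builds on, applied to a hypothetical $\ell_1$-regularized least-squares instance; the C-FISTA modifications themselves are not reproduced:

```python
import numpy as np

def soft_threshold(v, tau):
    # proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, iters):
    # Standard FISTA for min 0.5*||Ax - b||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)  # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])   # toy data (assumed)
b = np.array([1.0, 2.0, 2.0])
x = fista(A, b, lam=0.1, iters=500)
```

The momentum extrapolation in `y` is what distinguishes FISTA from plain proximal gradient descent.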
In this paper, we propose a unifying framework incorporating several momentum-related search directions for solving strongly monotone variational inequalities. The specific combinations of the search directions in the framework are made to guarantee the optimal iteration complexity bound of $\mathcal{O}\left(\kappa\ln(1/\epsilon)\right)$ to reach an $\epsilon$-solution, where $\kappa$ is the condition number. This framework provides the flexibility for algorithm designers to train -- among different parameter combinations -- the one that best suits the structure of the problem class at hand. The proposed framework includes the following iterative points and directions as its constituents: the extra-gradient, the optimistic gradient descent ascent (OGDA) direction (aka optimism), the heavy-ball direction, and Nesterov's extrapolation points. As a result, all the aforementioned methods become special cases under the general scheme of extra points. We also specialize this approach to strongly convex minimization, and show that a similar extra-point approach achieves the optimal iteration complexity bound of $\mathcal{O}(\sqrt{\kappa}\ln(1/\epsilon))$ for this class of problems.
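Among the constituent directions listed above, the extra-gradient method is the simplest to sketch: it evaluates the operator at a predicted extra point and then corrects the iterate with that evaluation. The linear operator and step size below are illustrative assumptions, not the paper's framework:

```python
import numpy as np

# Extra-gradient method for a strongly monotone VI on R^n (unconstrained,
# so the projection is the identity and the solution satisfies F(x*) = 0).
def extra_gradient(F, x0, eta, iters):
    x = x0.copy()
    for _ in range(iters):
        y = x - eta * F(x)        # extra (prediction) point
        x = x - eta * F(y)        # correction using the gradient at y
    return x

A = np.array([[2.0, 1.0], [-1.0, 2.0]])  # symmetric part 2I => strongly monotone
b = np.array([1.0, -1.0])
F = lambda x: A @ x + b
x_star = np.linalg.solve(A, -b)
x = extra_gradient(F, np.zeros(2), eta=0.2, iters=200)
```

Unlike OGDA, extra-gradient needs two operator evaluations per iteration, which is precisely the trade-off the unifying framework lets a designer tune.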
This paper expands the work on the distributionally robust newsvendor problem to incorporate moment constraints. The use of the Wasserstein distance as the ambiguity measure is preserved. The infinite dimensional primal problem is formulated, and problem-of-moments duality is invoked to derive the simpler finite dimensional dual problem. An important research question is: how does distributional ambiguity affect the optimal order quantity and the corresponding profits/costs? To investigate this, some theory is developed and a case study in auto sales is performed. We conclude with some comments on directions for further research.
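For context, the ambiguity-free baseline that distributionally robust models perturb is the classical newsvendor critical fractile $Q^* = F^{-1}\left(c_u/(c_u + c_o)\right)$, where $c_u$ and $c_o$ are the unit underage and overage costs. The exponential demand model and cost figures below are hypothetical illustrations:

```python
import math

# Classical (known-distribution) newsvendor: order the critical fractile
# of the demand distribution. Exponential demand with mean theta is assumed
# purely for illustration, since its quantile function is closed-form.
def newsvendor_quantity(cu, co, demand_quantile):
    q = cu / (cu + co)                 # critical fractile
    return demand_quantile(q)

theta = 100.0                           # mean demand (assumed)
exp_quantile = lambda q: -theta * math.log(1.0 - q)
Q = newsvendor_quantity(cu=10.0, co=5.0, demand_quantile=exp_quantile)
```

Here the fractile is $2/3$, so $Q = 100\ln 3 \approx 109.86$; distributional ambiguity changes this quantity, which is exactly the question the paper studies.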
This paper expands the notion of robust moment problems to incorporate distributional ambiguity using the Wasserstein distance as the ambiguity measure. The classical Chebyshev-Cantelli (zeroth partial moment) inequalities, Scarf and Lo (first partial moment) bounds, and semideviation (second partial moment) in one dimension are investigated. The infinite dimensional primal problems are formulated and the simpler finite dimensional dual problems are derived. A principal motivating question is how data-driven distributional ambiguity affects the moment bounds. Towards answering this question, some theory is developed and computational experiments are conducted for specific problem instances in inventory control and portfolio management. Finally, some open questions and suggestions for future research are discussed.
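As a concrete instance of the first-partial-moment bounds mentioned above, Scarf's classical distribution-free newsvendor quantity under known mean and standard deviation can be computed in closed form. The formula below is the classical result (with selling price $p$ and unit cost $c$), and the numbers are hypothetical:

```python
import math

# Scarf's (1958) distribution-free order quantity given only the mean mu and
# standard deviation sigma of demand:
#   Q* = mu + (sigma/2) * (sqrt((p - c)/c) - sqrt(c/(p - c))).
# Numbers below are illustrative assumptions, not from the paper.
def scarf_quantity(mu, sigma, p, c):
    r = (p - c) / c                     # profit-to-cost ratio
    return mu + 0.5 * sigma * (math.sqrt(r) - 1.0 / math.sqrt(r))

Q = scarf_quantity(mu=100.0, sigma=20.0, p=15.0, c=5.0)
```

Note the sanity check built into the formula: when $p = 2c$ the ratio is 1 and the order quantity equals the mean.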
In this paper, we propose a cubic regularized Newton (CRN) method for solving convex-concave saddle point problems (SPP). At each iteration, a cubic regularized saddle point subproblem is constructed and solved, which provides a search direction for the iterate. With properly chosen stepsizes, the method is shown to converge to the saddle point with global linear and local superlinear convergence rates, if the saddle point function is gradient Lipschitz and strongly-convex-strongly-concave. In the case that the function is merely convex-concave, we propose a homotopy continuation (or path-following) method. Under a Lipschitz-type error bound condition, we present an iteration complexity bound of $\mathcal{O}\left(\ln\left(1/\epsilon\right)\right)$ to reach an $\epsilon$-solution through a homotopy continuation approach, and the iteration complexity bound becomes $\mathcal{O}\left(\left(1/\epsilon\right)^{\frac{1-\theta}{\theta^2}}\right)$ under a Hölderian-type error bound condition involving a parameter $\theta$ ($0<\theta<1$).
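To illustrate the cubic regularized subproblem in the simplest setting, the sketch below runs the one-dimensional Nesterov-Polyak cubic Newton step for plain minimization; the paper's saddle-point construction in $(x, y)$ jointly is not reproduced, and the toy objective is an assumption:

```python
import math

# One-dimensional cubic-regularized Newton step: each iteration minimizes the
# model g*s + 0.5*h*s^2 + (M/6)*|s|^3, whose minimizer has the closed form
#   s = -2g / (h + sqrt(h^2 + 2*M*|g|)).
# Toy objective f(x) = e^x - x, whose minimizer is x = 0.
def crn_minimize(grad, hess, M, x0, iters):
    x = x0
    for _ in range(iters):
        g, h = grad(x), hess(x)
        x += -2.0 * g / (h + math.sqrt(h * h + 2.0 * M * abs(g)))
    return x

x = crn_minimize(grad=lambda x: math.exp(x) - 1.0,
                 hess=lambda x: math.exp(x),
                 M=8.0, x0=2.0, iters=30)
```

The cubic term keeps the step conservative far from the solution, while near it the method behaves like Newton's method and converges superlinearly.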
Nonlinearly constrained nonconvex and nonsmooth optimization models play an increasingly important role in machine learning, statistics and data analytics. In this paper, based on the augmented Lagrangian function we introduce a flexible first-order primal-dual method, to be called nonconvex auxiliary problem principle of augmented Lagrangian (NAPP-AL), for solving a class of nonlinearly constrained nonconvex and nonsmooth optimization problems. We demonstrate that NAPP-AL converges to a stationary solution at the rate of $o(1/\sqrt{k})$, where $k$ is the number of iterations. Moreover, under an additional error bound condition (to be called VP-EB in the paper), we further show that the convergence rate is in fact linear. Finally, we show that the famous Kurdyka-Łojasiewicz property and the metric subregularity imply the aforementioned VP-EB condition.
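The primal-dual structure underlying augmented-Lagrangian methods can be sketched on a toy convex problem: minimize the augmented Lagrangian in the primal variable, then update the multiplier with the constraint residual. This generic method of multipliers is shown only for intuition (the constrained quadratic below is an assumption; NAPP-AL's nonconvex/nonsmooth machinery is not reproduced):

```python
import numpy as np

# Method of multipliers on:  minimize ||x - a||^2  subject to  1^T x = 1.
# Because the problem is quadratic, the inner minimization of
#   L_rho(x, lam) = ||x - a||^2 + lam*(1^T x - 1) + (rho/2)*(1^T x - 1)^2
# is solved exactly via its stationarity conditions.
def augmented_lagrangian(a, rho, iters):
    n = a.size
    ones = np.ones(n)
    lam = 0.0
    x = np.zeros(n)
    for _ in range(iters):
        # stationarity: 2(x - a) + (lam + rho*(1^T x - 1)) * 1 = 0
        H = 2.0 * np.eye(n) + rho * np.outer(ones, ones)
        x = np.linalg.solve(H, 2.0 * a - lam * ones + rho * ones)
        lam += rho * (ones @ x - 1.0)   # dual (multiplier) update
    return x, lam

a = np.array([1.0, 2.0])
x, lam = augmented_lagrangian(a, rho=1.0, iters=50)
```

For this instance the KKT solution is $x^* = (0, 1)$ with multiplier $\lambda^* = 2$, and the multiplier iteration contracts toward it linearly.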
Wenye Li, Shuzhong Zhang (2020)
Random projection is often used to project higher-dimensional vectors onto a lower-dimensional space, while approximately preserving their pairwise distances. It has emerged as a powerful tool in various data processing tasks and has attracted considerable research interest. Partly motivated by recent discoveries in neuroscience, in this paper we study the problem of random projection using binary matrices with controllable sparsity patterns. Specifically, we propose two sparse binary projection models that work on general data vectors. Compared with conventional random projection models with dense projection matrices, our proposed models enjoy significant computational advantages due to their sparsity structure, as well as improved accuracies in empirical evaluations.
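A minimal sketch of the idea, assuming one simple controllable-sparsity pattern (exactly $k$ ones per row, placed uniformly at random); the paper's specific constructions and scaling factors are not reproduced:

```python
import numpy as np

# Sparse binary random projection: project d-dimensional vectors down to m
# dimensions with a 0/1 matrix having exactly k ones per row. Each output
# coordinate then costs only k additions instead of d multiply-adds.
def sparse_binary_projection(rng, m, d, k):
    P = np.zeros((m, d))
    for i in range(m):
        P[i, rng.choice(d, size=k, replace=False)] = 1.0
    return P

rng = np.random.default_rng(0)
m, d, k = 64, 1000, 10
P = sparse_binary_projection(rng, m, d, k)
X = rng.standard_normal((5, d))         # a few high-dimensional data vectors
Y = X @ P.T                             # their m-dimensional projections
```

The computational advantage comes entirely from the sparsity: the projection touches $k/d$ of the input coordinates per output entry.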
This paper expands the notion of robust profit opportunities in financial markets to incorporate distributional uncertainty, using the Wasserstein distance as the ambiguity measure. Financial markets with risky and risk-free assets are considered. The infinite dimensional primal problems are formulated, leading to their simpler finite dimensional dual problems. A principal motivating question is how distributional uncertainty helps or hurts the robustness of the profit opportunity. Towards answering this question, some theory is developed and computational experiments are conducted. Finally, some open questions and suggestions for future research are discussed.