
Convexification with bounded gap for randomly projected quadratic optimization

Published by: Terunari Fuji
Publication date: 2021
Language: English





Random projection techniques based on the Johnson-Lindenstrauss lemma randomly aggregate the constraints or variables of an optimization problem while approximately preserving its optimal value, which yields a smaller-scale optimization problem. D'Ambrosio et al. applied random projection to quadratic optimization problems in order to decrease the number of decision variables. Although the problem becomes smaller, the projected problem is almost surely non-convex whenever the original problem is non-convex, and is therefore still hard to solve. In this paper, exploiting the fact that random projection alleviates the level of non-convexity of a non-convex quadratic optimization problem, we find an approximate global optimal value of the problem by reducing it to a smaller convex problem. To the best of our knowledge, this paper is the first to use random projection for the convexification of non-convex optimization problems. We evaluate the approximation error between the optimal value of a non-convex quadratic optimization problem and that of its convexified, randomly projected problem.
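The following is a minimal numerical sketch of the variable-projection idea described in the abstract above. It is not the paper's construction: the dimensions, the Gaussian scaling of the projection matrix, and the eigenvalue-clipping convexification at the end are illustrative assumptions, chosen only to show how projecting the variables with a Johnson-Lindenstrauss-style matrix can shrink the negative curvature of an indefinite quadratic form.

```python
# Illustration only: project the variables of a quadratic form with a Gaussian matrix
# and observe how the "level of non-convexity" (most negative Hessian eigenvalue) shrinks.
import numpy as np

rng = np.random.default_rng(0)

n, k = 500, 50                      # original and projected dimensions (assumed values)

# Build a mildly non-convex quadratic form x^T Q x: mostly positive curvature,
# with a few directions of negative curvature.
eigvals = np.concatenate([rng.uniform(0.5, 2.0, n - 5), -rng.uniform(0.1, 0.5, 5)])
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q = (U * eigvals) @ U.T

# Random projection of the decision variables: x = P u with P having i.i.d.
# Gaussian entries (Johnson-Lindenstrauss-style scaling assumed here).
P = rng.standard_normal((n, k)) / np.sqrt(k)
Q_proj = P.T @ Q @ P                # reduced k x k Hessian of the projected problem

print("min eigenvalue of Q       :", np.linalg.eigvalsh(Q)[0])
print("min eigenvalue of P^T Q P :", np.linalg.eigvalsh(Q_proj)[0])

# A naive convexification of the projected problem: clip any remaining negative
# eigenvalues to zero.  (The paper bounds the optimal-value gap of its own
# convexification; this clipping is only a stand-in to show the pipeline.)
w, V = np.linalg.eigh(Q_proj)
Q_convex = (V * np.maximum(w, 0.0)) @ V.T
```

In runs like this, the few mild directions of negative curvature in Q are typically swamped by the many positive ones after projection, so the reduced Hessian is often already positive semidefinite or nearly so; the approximation error of convexifying what remains is what the paper quantifies.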




Read also

Warren Adams, Akshay Gupte, 2017
Convex hulls of monomials have been widely studied in the literature, and monomial convexifications are implemented in global optimization software for relaxing polynomials. However, there has been no study of the error in the global optimum from such approaches. We give bounds on the worst-case error for convexifying a monomial over subsets of $[0,1]^n$. This implies additive error bounds for relaxing a polynomial optimization problem by convexifying each monomial separately. Our main error bounds depend primarily on the degree of the monomial, making them easy to compute. Since monomial convexification studies depend on the bounds on the associated variables, in the second part, we conduct an error analysis for a multilinear monomial over two different types of box constraints. As part of this analysis, we also derive the convex hull of a multilinear monomial over $[-1,1]^n$.
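As a concrete instance of the convexification error discussed in this abstract, the snippet below (a sketch, not taken from the paper) grid-searches the gap between the bilinear monomial x*y and its well-known convex envelope max(0, x + y - 1) on [0,1]^2; the worst-case gap is 1/4, attained at x = y = 1/2. This is the simplest case of the degree-dependent bounds the paper studies.

```python
# Estimate the worst-case gap between x*y and its convex envelope on [0,1]^2.
import numpy as np

xs = np.linspace(0.0, 1.0, 401)
X, Y = np.meshgrid(xs, xs)

monomial = X * Y
envelope = np.maximum(0.0, X + Y - 1.0)     # convex envelope of x*y on [0,1]^2

gap = monomial - envelope
i = np.unravel_index(np.argmax(gap), gap.shape)
print(f"max gap ~= {gap[i]:.4f} at (x, y) = ({X[i]:.2f}, {Y[i]:.2f})")
```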
Conic optimization is the minimization of a differentiable convex objective function subject to conic constraints. We propose a novel primal-dual first-order method for conic optimization, named the proportional-integral projected gradient method (PIPG). PIPG ensures that both the primal-dual gap and the constraint violation converge to zero at the rate of O(1/k), where k is the number of iterations. If the objective function is strongly convex, PIPG improves the convergence rate of the primal-dual gap to O(1/k^2). Further, unlike any existing first-order methods, PIPG also improves the convergence rate of the constraint violation to O(1/k^3). We demonstrate the application of PIPG in constrained optimal control problems.
We consider optimization problems on Riemannian manifolds with equality and inequality constraints, which we call Riemannian nonlinear optimization (RNLO) problems. Although they have numerous applications, the existing studies on them are limited, especially in terms of algorithms. In this paper, we propose Riemannian sequential quadratic optimization (RSQO) that uses a line-search technique with an ℓ1 penalty function as an extension of the standard SQO algorithm for constrained nonlinear optimization problems in Euclidean spaces to Riemannian manifolds. We prove its global convergence to a Karush-Kuhn-Tucker point of the RNLO problem by means of parallel transport and the exponential mapping. Furthermore, we establish its local quadratic convergence by analyzing the relationship between sequences generated by RSQO and the Riemannian Newton method. Ours is the first algorithm that has both global and local convergence properties for constrained nonlinear optimization on Riemannian manifolds. Empirical results show that RSQO finds solutions more stably and with higher accuracy compared with the existing Riemannian penalty and augmented Lagrangian methods.
Amit Verma, Mark Lewis, 2021
Quadratic Unconstrained Binary Optimization models are useful for solving a diverse range of optimization problems. Constraints can be added by incorporating quadratic penalty terms into the objective, often with the introduction of slack variables needed for conversion of inequalities. This transformation can lead to a significant increase in the size and density of the problem. Herein, we propose an efficient approach for recasting inequality constraints that reduces the number of linear and quadratic variables. Experimental results illustrate the efficacy.
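The snippet below is a minimal sketch of the standard slack-plus-penalty transformation this abstract refers to (the baseline, not the paper's improved reformulation): the binary inequality x1 + x2 <= 1 is enforced by adding one binary slack bit s and the quadratic penalty M*(x1 + x2 + s - 1)^2 to the objective. The penalty weight M and the toy objective are assumed values; general inequalities need enough slack bits to encode the full slack range, which is exactly the growth in size and density mentioned above.

```python
# Enumerate a tiny QUBO with a slack-variable penalty enforcing x1 + x2 <= 1.
import itertools

M = 10.0                                    # penalty weight (assumed value)

def objective(x1, x2):
    # Toy objective: maximize 2*x1 + 3*x2, written as a minimization.
    return -(2 * x1 + 3 * x2)

def qubo_value(x1, x2, s):
    # Penalty is zero exactly when x1 + x2 + s == 1, i.e. when x1 + x2 <= 1
    # with the binary slack s absorbing the difference.
    penalty = M * (x1 + x2 + s - 1) ** 2
    return objective(x1, x2) + penalty

best = min(itertools.product([0, 1], repeat=3), key=lambda v: qubo_value(*v))
print("best (x1, x2, slack):", best, "value:", qubo_value(*best))   # -> (0, 1, 0), -3.0
```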
A sequential quadratic optimization algorithm is proposed for solving smooth nonlinear equality constrained optimization problems in which the objective function is defined by an expectation of a stochastic function. The algorithmic structure of the proposed method is based on a step decomposition strategy that is known in the literature to be widely effective in practice, wherein each search direction is computed as the sum of a normal step (toward linearized feasibility) and a tangential step (toward objective decrease in the null space of the constraint Jacobian). However, the proposed method is distinguished from others in the literature in that it both allows the use of stochastic objective gradient estimates and possesses convergence guarantees even in the setting in which the constraint Jacobians may be rank deficient. The results of numerical experiments demonstrate that the algorithm offers superior performance when compared to popular alternatives.
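As an illustration of the step decomposition named in this abstract, the sketch below computes, for an equality-constrained problem min f(x) s.t. c(x) = 0, a normal step (a minimum-norm least-squares step toward linearized feasibility, J v = -c) and a tangential step (the negative gradient projected onto the null space of the Jacobian J). It deliberately omits the stochastic-gradient handling, rank-deficiency safeguards, Hessian model, and step-size rules of the cited algorithm; the data at the end are made up for the example.

```python
# Geometry of the normal/tangential step decomposition for min f(x) s.t. c(x) = 0.
import numpy as np

def decomposed_step(grad_f, jac_c, c_val):
    """Return (normal_step, tangential_step) at the current iterate."""
    # Normal step: minimum-norm solution of J v = -c (linearized feasibility).
    v = -np.linalg.pinv(jac_c) @ c_val
    # Tangential step: project -grad_f onto the null space of J.
    proj = np.eye(jac_c.shape[1]) - np.linalg.pinv(jac_c) @ jac_c
    u = -proj @ grad_f
    return v, u

# Tiny usage example with made-up data (2 variables, 1 constraint).
g = np.array([1.0, 2.0])            # (stochastic) objective gradient estimate
J = np.array([[1.0, 1.0]])          # constraint Jacobian
c = np.array([0.5])                 # constraint value c(x)
v, u = decomposed_step(g, J, c)
print("normal step:", v, " tangential step:", u, " J @ u =", J @ u)
```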