Random projection techniques based on the Johnson-Lindenstrauss lemma are used to randomly aggregate the constraints or variables of optimization problems while approximately preserving their optimal values, which leads to smaller-scale optimization problems. D'Ambrosio et al. applied random projection to a quadratic optimization problem to decrease the number of decision variables. Although the problem size becomes smaller, the projected problem is also almost surely non-convex if the original problem is non-convex, and hence remains hard to solve. In this paper, focusing on the fact that random projection can alleviate the degree of non-convexity of a non-convex quadratic optimization problem, we find an approximate global optimal value of the problem by reducing it to a smaller-scale convex problem. To the best of our knowledge, this paper is the first to use random projection for the convexification of non-convex optimization problems. We evaluate the approximation error between the optimal value of a non-convex quadratic optimization problem and that of its convexified, randomly projected counterpart.
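As a rough, self-contained illustration of the idea (not code from the paper), the sketch below builds a non-convex quadratic objective, applies a Gaussian Johnson-Lindenstrauss-style substitution x = P y, and compares the smallest Hessian eigenvalues before and after projection. The problem sizes, the eigenvalue spectrum, and the 1/sqrt(d) scaling are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 40   # original and projected numbers of variables (illustrative)

# Build a symmetric Q with a few negative eigenvalues, so the quadratic
# objective x^T Q x + c^T x is non-convex.
eigs = np.concatenate([rng.uniform(0.5, 2.0, n - 5),
                       -rng.uniform(0.1, 0.5, 5)])
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q = V @ np.diag(eigs) @ V.T
c = rng.standard_normal(n)

# Gaussian random projection: substituting x = P y replaces the
# n-variable problem with a d-variable one.
P = rng.standard_normal((n, d)) / np.sqrt(d)
Q_proj = P.T @ Q @ P   # Hessian of the projected objective y^T (P^T Q P) y
c_proj = P.T @ c

print("min eigenvalue of Q       :", np.linalg.eigvalsh(Q).min())
print("min eigenvalue of P^T Q P :", np.linalg.eigvalsh(Q_proj).min())
# With only a few negative directions in Q and d << n, the projected
# Hessian is typically positive definite, i.e. the reduced problem is convex.
```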
Convex hulls of monomials have been widely studied in the literature, and monomial convexifications are implemented in global optimization software for relaxing polynomials. However, there has been no study of the error in the global optimum from suc
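For context, the canonical monomial convexification is the McCormick envelope of the bilinear monomial x*y over a box. The sketch below is our illustration, not the paper's error analysis: it evaluates the envelope at a few points to show where the relaxation is tight (the box corners) and where it is loose (the box center).

```python
# McCormick envelope of the bilinear monomial w = x*y over the box
# [xL, xU] x [yL, yU]: four linear inequalities give its convex hull.
xL, xU, yL, yU = 0.0, 1.0, 0.0, 1.0

def mccormick_lower(x, y):
    # Convex underestimator: pointwise max of the two lower cuts.
    return max(xL * y + yL * x - xL * yL,
               xU * y + yU * x - xU * yU)

def mccormick_upper(x, y):
    # Concave overestimator: pointwise min of the two upper cuts.
    return min(xU * y + yL * x - xU * yL,
               xL * y + yU * x - xL * yU)

# Tight at the corners of the box, loosest at its center.
for x, y in [(0.0, 0.0), (1.0, 1.0), (0.5, 0.5)]:
    print(f"x*y = {x*y:.2f}  envelope = "
          f"[{mccormick_lower(x, y):.2f}, {mccormick_upper(x, y):.2f}]")
```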
Conic optimization is the minimization of a differentiable convex objective function subject to conic constraints. We propose a novel primal-dual first-order method for conic optimization, named proportional-integral projected gradient method (PIPG).
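The published PIPG update rules are not reproduced here; as a schematic stand-in, the sketch below runs a generic primal-dual projected-gradient iteration in which the dual variable accumulates the constraint violation (the "integral" term) and the instantaneous violation enters as a "proportional" term. The problem data, step sizes, and iteration count are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 3
A = rng.standard_normal((m, n))
b = A @ rng.uniform(0.5, 1.5, n)   # b chosen so a positive feasible point exists
x0 = rng.standard_normal(n)

# min 0.5*||x - x0||^2  s.t.  A x = b,  x >= 0
x = np.zeros(n)
y = np.zeros(m)                    # dual variable = integral of the violation
tau = sigma = 0.1                  # assumed steps with tau*sigma*||A||^2 < 1

for _ in range(5000):
    v = y + sigma * (A @ x - b)            # integral + proportional dual signal
    grad = (x - x0) + A.T @ v              # gradient of the Lagrangian in x
    x = np.maximum(0.0, x - tau * grad)    # projected primal gradient step
    y = y + sigma * (A @ x - b)            # integral (dual) update

# Both residuals should shrink toward zero as the iterates approach a KKT point.
print("||Ax - b|| :", np.linalg.norm(A @ x - b))
print("min(x)     :", x.min())
```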
We consider optimization problems on Riemannian manifolds with equality and inequality constraints, which we call Riemannian nonlinear optimization (RNLO) problems. Although they have numerous applications, the existing studies on them are limited es
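As background for the unconstrained building block of RNLO, the sketch below (our illustration, with assumed sizes and step length) performs Riemannian gradient descent on the unit sphere for the Rayleigh quotient, using tangent-space projection and normalization as the retraction; an RNLO problem would impose further equality and inequality constraints on top of the manifold constraint.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Rayleigh quotient minimization on the unit sphere S^{n-1}:
#   min_x  x^T A x   s.t.  ||x|| = 1,
# whose optimal value is the smallest eigenvalue of A.
M = rng.standard_normal((n, n))
A = (M + M.T) / 2

x = rng.standard_normal(n)
x /= np.linalg.norm(x)

step = 0.1
for _ in range(500):
    egrad = 2 * A @ x                  # Euclidean gradient
    rgrad = egrad - (x @ egrad) * x    # project onto the tangent space at x
    x = x - step * rgrad
    x /= np.linalg.norm(x)             # retraction back onto the sphere

print("Riemannian GD value :", x @ A @ x)
print("smallest eigenvalue :", np.linalg.eigvalsh(A).min())
```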
Quadratic Unconstrained Binary Optimization models are useful for solving a diverse range of optimization problems. Constraints can be added by incorporating quadratic penalty terms into the objective, often with the introduction of slack variables n
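A minimal sketch of the penalty construction, on a hypothetical toy problem with an assumed penalty weight: the inequality x1 + x2 <= 1 is converted to an equality by a binary slack variable, enforced through a quadratic penalty, and the resulting QUBO is solved by brute force.

```python
import itertools

# Toy problem: maximize 3*x1 + 2*x2 subject to x1 + x2 <= 1, x in {0,1}^2.
# QUBO form: introduce a binary slack s with x1 + x2 + s = 1, then add the
# quadratic penalty P*(x1 + x2 + s - 1)^2 to the (minimization) objective.
P = 10.0  # assumed penalty weight; must dominate any objective gain

def qubo_energy(x1, x2, s):
    objective = -(3 * x1 + 2 * x2)           # maximize -> minimize negation
    penalty = P * (x1 + x2 + s - 1) ** 2
    return objective + penalty

best = min(itertools.product([0, 1], repeat=3),
           key=lambda v: qubo_energy(*v))
print("optimal (x1, x2, slack):", best, "energy:", qubo_energy(*best))
# Expected: (1, 0, 0) -> the constraint holds and the larger coefficient
# is selected, with energy -3.
```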
A sequential quadratic optimization algorithm is proposed for solving smooth nonlinear equality constrained optimization problems in which the objective function is defined by an expectation of a stochastic function. The algorithmic structure of the
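A minimal sketch of one plausible building block, not the paper's algorithm: each iteration solves the SQP subproblem's KKT system using a sampled, noisy objective gradient and a linear equality constraint, with a diminishing step size to average out the noise. The test problem, noise level, and step rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
a = rng.standard_normal(n)     # linear equality constraint a^T x = 1

def sampled_grad(x):
    # Stochastic gradient of f(x) = 0.5*||x||^2 (true gradient x) plus noise,
    # standing in for an expectation estimated from samples.
    return x + 0.1 * rng.standard_normal(n)

x = rng.standard_normal(n)
H = np.eye(n)                  # (approximate) Hessian of the Lagrangian

for k in range(1, 2001):
    g = sampled_grad(x)
    c = a @ x - 1.0            # constraint violation c(x)
    # Solve the SQP subproblem's KKT system:
    #   [H   a][d     ]   [-g]
    #   [a^T 0][lambda] = [-c]
    K = np.block([[H, a[:, None]], [a[None, :], np.zeros((1, 1))]])
    sol = np.linalg.solve(K, np.concatenate([-g, [-c]]))
    x = x + (1.0 / k) * sol[:n]   # diminishing step size against the noise

# The iterates should approach the closed-form solution a / (a^T a).
print("constraint violation:", abs(a @ x - 1.0))
print("x vs. closed form   :", x, a / (a @ a))
```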