
Error bounds for monomial convexification in polynomial optimization

Posted by: Akshay Gupte
Publication date: 2017
Language: English





Convex hulls of monomials have been widely studied in the literature, and monomial convexifications are implemented in global optimization software for relaxing polynomials. However, there has been no study of the error in the global optimum from such approaches. We give bounds on the worst-case error for convexifying a monomial over subsets of $[0,1]^n$. This implies additive error bounds for relaxing a polynomial optimization problem by convexifying each monomial separately. Our main error bounds depend primarily on the degree of the monomial, making them easy to compute. Since monomial convexifications depend on the bounds on the associated variables, in the second part we conduct an error analysis for a multilinear monomial over two different types of box constraints. As part of this analysis, we also derive the convex hull of a multilinear monomial over $[-1,1]^n$.
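
As a concrete illustration of the gap the paper bounds, the sketch below samples the multilinear monomial m(x) = x_1 ⋯ x_n against its classical envelopes over [0,1]^n (concave envelope min_i x_i, convex envelope max(0, Σ_i x_i − n + 1)). This is only a numerical probe of the convexification error; the choice n = 4 and the sampling scheme are illustrative, not taken from the paper.

import numpy as np

# Multilinear monomial m(x) = x_1 * x_2 * ... * x_n over [0,1]^n.
def monomial(x):
    return np.prod(x)

# Classical convex envelope of m over [0,1]^n: max(0, sum_i x_i - (n - 1)).
def convex_env(x):
    return max(0.0, np.sum(x) - (len(x) - 1))

# Classical concave envelope of m over [0,1]^n: min_i x_i.
def concave_env(x):
    return np.min(x)

rng = np.random.default_rng(0)
n, worst = 4, 0.0
for _ in range(100_000):
    x = rng.random(n)
    # m always lies between its two envelopes; record the largest deviation.
    gap = max(monomial(x) - convex_env(x), concave_env(x) - monomial(x))
    worst = max(worst, gap)
print(f"largest sampled envelope gap for n = {n}: {worst:.4f}")

On the convex side the gap peaks at x_i = (n−1)/n with value ((n−1)/n)^n, which tends to 1/e as the degree n grows; this matches the degree-driven flavor of the bounds described in the abstract.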




Read also

Random projection techniques based on the Johnson-Lindenstrauss lemma are used to randomly aggregate the constraints or variables of optimization problems while approximately preserving their optimal values, which leads to smaller-scale optimization problems. D'Ambrosio et al. applied random projection to a quadratic optimization problem so as to decrease the number of decision variables. Although the problem size becomes smaller, the projected problem will almost surely be non-convex if the original problem is non-convex, and hence will be hard to solve. In this paper, by focusing on the fact that the level of non-convexity of a non-convex quadratic optimization problem can be alleviated by random projection, we find an approximate global optimal value of the problem by reducing it to a smaller convex problem. To the best of our knowledge, our paper is the first to use random projection for convexification of non-convex optimization problems. We evaluate the approximation error between the optimal values of a non-convex optimization problem and its convexified, randomly projected problem.
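
A minimal sketch of the projection step itself, assuming a Gaussian Johnson-Lindenstrauss matrix and a generic indefinite quadratic; all problem data below are synthetic, and the paper's actual construction and convexification guarantee involve more than is shown here.

import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 20  # original and projected dimensions (illustrative sizes)

# A generic non-convex quadratic objective x^T Q x + c^T x (Q indefinite).
A = rng.standard_normal((n, n))
Q = (A + A.T) / 2.0
c = rng.standard_normal(n)

# Gaussian Johnson-Lindenstrauss matrix: substitute x = P y with y in R^k.
P = rng.standard_normal((n, k)) / np.sqrt(k)

# Projected, smaller quadratic in y: y^T (P^T Q P) y + (P^T c)^T y.
Qp, cp = P.T @ Q @ P, P.T @ c

eig_orig = np.linalg.eigvalsh(Q)
eig_proj = np.linalg.eigvalsh(Qp)
print(f"original  Hessian {n}x{n}: eigenvalues in [{eig_orig[0]:.1f}, {eig_orig[-1]:.1f}]")
print(f"projected Hessian {k}x{k}: eigenvalues in [{eig_proj[0]:.1f}, {eig_proj[-1]:.1f}]")

The sketch shows only the mechanics of replacing x by P y; the paper's contribution is quantifying how much the projection alleviates non-convexity and bounding the resulting approximation error.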
This paper studies stochastic optimization problems with polynomials. We propose an optimization model with sample averages and perturbations. Lasserre-type Moment-SOS relaxations are used to solve the sample average optimization. Properties of the optimization and its relaxations are studied. Numerical experiments are presented.
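
A minimal sketch of the sample-average step, with a hypothetical univariate stochastic polynomial f(x, ξ) = (x − ξ)² + 0.1x⁴ and a local solver standing in for the paper's Moment-SOS relaxations.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

# Hypothetical stochastic polynomial f(x, xi) = (x - xi)^2 + 0.1 x^4,
# with xi ~ N(1, 0.5^2); in practice only N samples of xi are available.
xi = rng.normal(loc=1.0, scale=0.5, size=10_000)

def sample_average(x):
    # SAA objective (1/N) sum_i f(x, xi_i): itself a polynomial in x,
    # which is what makes Moment-SOS certification applicable.
    return np.mean((x - xi) ** 2) + 0.1 * x ** 4

res = minimize_scalar(sample_average, bounds=(-5.0, 5.0), method="bounded")
print(f"SAA minimizer ~ {res.x:.4f}, value ~ {res.fun:.4f}")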
We consider optimization algorithms that successively minimize simple Taylor-like models of the objective function. Methods of Gauss-Newton type for minimizing the composition of a convex function and a smooth map are common examples. Our main result is an explicit relationship between the step-size of any such algorithm and the slope of the function at a nearby point. Consequently, we (1) show that the step-sizes can be reliably used to terminate the algorithm, (2) prove that as long as the step-sizes tend to zero, every limit point of the iterates is stationary, and (3) show that conditions akin to classical quadratic growth imply that the step-sizes linearly bound the distance of the iterates to the solution set. The latter, so-called error bound property is typically used to establish linear (or faster) convergence guarantees. Analogous results hold when the step-size is replaced by the square root of the decrease in the model's value. We complete the paper with extensions to the case when the models are minimized only inexactly.
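
A minimal sketch of result (1), using a plain Gauss-Newton iteration on a small square polynomial system, i.e. h = ‖·‖² in the composite model and no proximal term; both are simplifications relative to the paper, and the system itself is made up.

import numpy as np

# Illustrative smooth map; we seek x with c(x) = 0.
def c(x):
    return np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])

def jac(x):
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

x = np.array([2.0, 2.0])
for it in range(100):
    # Gauss-Newton step: minimize the Taylor-like model ||c(x) + J d||^2 in d.
    d = np.linalg.lstsq(jac(x), -c(x), rcond=None)[0]
    x = x + d
    # Result (1) of the abstract: a small step is a trustworthy stopping test.
    if np.linalg.norm(d) < 1e-10:
        break
print(f"stopped after {it + 1} steps: x = {x}, ||c(x)|| = {np.linalg.norm(c(x)):.2e}")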
Jiawang Nie, Zi Yang (2021)
Multi-objective optimization optimizes several objective functions over a common feasible set. Since the objectives usually do not share a common optimizer, people often consider (weakly) Pareto points. This paper studies multi-objective optimization problems that are given by polynomial functions. First, we study the convex geometry of (weakly) Pareto values and give a convex representation for them. Linear scalarization problems (LSPs) and Chebyshev scalarization problems (CSPs) are typical approaches for obtaining (weakly) Pareto points. For LSPs, we show how to use tight relaxations to solve them and how to detect the existence or nonexistence of proper weights. For CSPs, we show how to solve them by moment relaxations. Moreover, we show how to check whether a given point is a (weakly) Pareto point and how to detect the existence or nonexistence of (weakly) Pareto points. We also study how to detect unboundedness of polynomial optimization problems, which is used to detect the nonexistence of proper weights or of (weakly) Pareto points.
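
A minimal sketch of the two scalarizations on a hypothetical pair of univariate polynomial objectives, using local solvers rather than the paper's moment relaxations.

from scipy.optimize import minimize, minimize_scalar

# Two competing polynomial objectives (hypothetical example).
f1 = lambda x: (x - 1.0) ** 2
f2 = lambda x: (x + 1.0) ** 2

# LSP: minimize w*f1 + (1-w)*f2; sweeping w traces out Pareto points.
for w in (0.1, 0.5, 0.9):
    res = minimize_scalar(lambda x: w * f1(x) + (1.0 - w) * f2(x))
    print(f"LSP w={w:.1f}: x* = {res.x:+.3f}")

# CSP: minimize max_i w_i*(f_i(x) - z_i) with ideal point z = (0, 0) and
# equal weights; Nelder-Mead copes with the non-smooth max.
res = minimize(lambda x: max(0.5 * f1(x[0]), 0.5 * f2(x[0])),
               x0=[0.3], method="Nelder-Mead")
print(f"CSP: x* = {res.x[0]:+.3f}")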
Nan Li, Lihong Zhi (2012)
In this paper, we generalize the algorithm described by Rump and Graillat, as well as our previous work on certifying breadth-one singular solutions of polynomial systems, to compute verified and narrow error bounds such that a slightly perturbed system is guaranteed to possess an isolated singular solution within the computed bounds. Our new verification method is based on deflation techniques using smoothing parameters. We demonstrate the performance of the algorithm for systems with singular solutions of multiplicity up to hundreds.
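
A minimal univariate sketch of why deflation helps at a singular solution; the paper's method treats polynomial systems, adds smoothing parameters, and certifies the result with verified interval bounds, none of which appears here.

# f has a double root at x = 1, so plain Newton converges only linearly.
f   = lambda x: (x - 1.0) ** 2
df  = lambda x: 2.0 * (x - 1.0)
d2f = lambda x: 2.0

x = 2.0
for _ in range(30):
    x -= f(x) / df(x)        # the error merely halves at each step
print(f"plain Newton after 30 steps:   x = {x:.12f}")

# Deflation: at a multiplicity-2 root, f'(x) = 0 is a regular equation,
# and Newton applied to it converges quadratically.
x = 2.0
for _ in range(3):
    x -= df(x) / d2f(x)
print(f"deflated Newton after 3 steps: x = {x:.12f}")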