
Convergent relaxations of polynomial optimization problems with non-commuting variables

Posted by Stefano Pironio
Publication date: 2009
Research field: Physics
Paper language: English





We consider optimization problems with polynomial inequality constraints in non-commuting variables. These non-commuting variables are viewed as bounded operators on a Hilbert space whose dimension is not fixed and the associated polynomial inequalities as semidefinite positivity constraints. Such problems arise naturally in quantum theory and quantum information science. To solve them, we introduce a hierarchy of semidefinite programming relaxations which generates a monotone sequence of lower bounds that converges to the optimal solution. We also introduce a criterion to detect whether the global optimum is reached at a given relaxation step and show how to extract a global optimizer from the solution of the corresponding semidefinite programming problem.
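To make the construction concrete, here is a minimal sketch (not taken from the paper) of the lowest level of such a moment-matrix relaxation for a closely related standard example: maximizing the CHSH correlation over dichotomic operators, where operators of different parties commute. It assumes the cvxpy package with its bundled SCS solver; the problem instance, index ordering, and variable names are illustrative choices, not notation from the paper.

```python
import cvxpy as cp

# Level-1 moment matrix Gamma indexed by {1, A0, A1, B0, B1}, where the A's and
# B's are +/-1-valued operators. Gamma must be positive semidefinite, and the
# operator relations X_i^2 = 1 force a unit diagonal.
G = cp.Variable((5, 5), symmetric=True)
constraints = [G >> 0, cp.diag(G) == 1]

# Entries Gamma[A_i, B_j] play the role of the moments <A_i B_j>.
chsh = G[1, 3] + G[1, 4] + G[2, 3] - G[2, 4]

prob = cp.Problem(cp.Maximize(chsh), constraints)
prob.solve(solver=cp.SCS)
print(prob.value)   # ~ 2*sqrt(2), an upper bound on the quantum CHSH value
```

Higher levels of the hierarchy index the moment matrix by longer operator products and add the linear constraints implied by the operator relations; for a minimization problem the same construction produces the monotone sequence of lower bounds described in the abstract.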


Read also

In this paper, we are interested in the existence of Pareto solutions to vector polynomial optimization problems over a basic closed semi-algebraic set. By invoking some powerful tools from real semi-algebraic geometry, we first introduce the concept of tangency varieties; we then establish connections between the Palais-Smale condition, the Cerami condition, M-tameness, and properness for the problem under consideration, in which regularity at infinity plays an essential role. Based on these connections, we provide sufficient conditions for the existence of Pareto solutions to the problem, and we give examples to illustrate our main findings.
Jiawang Nie, Xindong Tang (2021)
This paper studies convex Generalized Nash Equilibrium Problems (GNEPs) that are given by polynomials. We use rational and parametric expressions for Lagrange multipliers to formulate efficient polynomial optimization problems for computing Generalized Nash Equilibria (GNEs). The Moment-SOS hierarchy of semidefinite relaxations is used to solve the polynomial optimization. Under some general assumptions, we prove that the method finds a GNE if one exists, or detects the nonexistence of GNEs. Numerical experiments are presented to show the efficiency of the method.
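As a rough illustration of the Moment-SOS machinery this method builds on (a single scalar polynomial rather than the paper's GNEP formulation), the sketch below computes the largest gamma for which p(x) - gamma admits a sum-of-squares certificate, using cvxpy; the example polynomial and names are illustrative assumptions.

```python
import cvxpy as cp

# Minimize p(x) = x^4 - 3x^2 + 1 via its degree-2 SOS relaxation:
# maximize gamma such that p(x) - gamma = m(x)^T Q m(x), m(x) = [1, x, x^2], Q >= 0.
p = {0: 1.0, 1: 0.0, 2: -3.0, 3: 0.0, 4: 1.0}   # coefficients of p by degree

Q = cp.Variable((3, 3), symmetric=True)
gamma = cp.Variable()

constraints = [
    Q >> 0,
    Q[0, 0] == p[0] - gamma,            # constant term
    2 * Q[0, 1] == p[1],                # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == p[2],      # coefficient of x^2
    2 * Q[1, 2] == p[3],                # coefficient of x^3
    Q[2, 2] == p[4],                    # coefficient of x^4
]

prob = cp.Problem(cp.Maximize(gamma), constraints)
prob.solve(solver=cp.SCS)
print(prob.value)   # ~ -1.25, the global minimum (exact here: nonnegative univariate polynomials are SOS)
```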
Signomial programs (SPs) are optimization problems specified in terms of signomials, which are weighted sums of exponentials composed with linear functionals of a decision variable. SPs are non-convex optimization problems in general, and families of NP-hard problems can be reduced to SPs. In this paper we describe a hierarchy of convex relaxations to obtain successively tighter lower bounds of the optimal value of SPs. This sequence of lower bounds is computed by solving increasingly larger-sized relative entropy optimization problems, which are convex programs specified in terms of linear and relative entropy functions. Our approach relies crucially on the observation that the relative entropy function -- by virtue of its joint convexity with respect to both arguments -- provides a convex parametrization of certain sets of globally nonnegative signomials with efficiently computable nonnegativity certificates via the arithmetic-geometric-mean inequality. By appealing to representation theorems from real algebraic geometry, we show that our sequences of lower bounds converge to the global optima for broad classes of SPs. Finally, we also demonstrate the effectiveness of our methods via numerical experiments.
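A minimal sketch of the underlying relative-entropy nonnegativity certificate (the single-negative-term AM/GM case, not the full hierarchy described in the abstract), assuming cvxpy with an exponential-cone-capable solver such as SCS; the example signomial is an illustrative choice.

```python
import numpy as np
import cvxpy as cp

# Certify a lower bound gamma for g(x) = exp(2x) + exp(-x) over all real x.
# For a signomial whose only possibly-negative term is the constant, global
# nonnegativity of g(x) - gamma holds if there is nu >= 0 with
#   sum_i nu_i * alpha_i = 0   and   sum_i nu_i * log(nu_i / (e * c_i)) <= -gamma.
alpha = np.array([2.0, -1.0])   # exponents of the positive terms
c = np.array([1.0, 1.0])        # their (positive) coefficients

nu = cp.Variable(2, nonneg=True)
gamma = cp.Variable()

constraints = [
    alpha @ nu == 0,                                   # exponent balance
    cp.sum(cp.rel_entr(nu, np.e * c)) <= -gamma,       # relative-entropy condition
]

prob = cp.Problem(cp.Maximize(gamma), constraints)
prob.solve(solver=cp.SCS)
print(prob.value)   # ~ 1.89, here equal to the true minimum 3 * 2**(-2/3)
```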
Many large-scale optimization problems can be expressed as composite optimization models. Accelerated first-order methods such as the fast iterative shrinkage-thresholding algorithm (FISTA) have proven effective for numerous large composite models. In this paper, we present a new variation of FISTA, called C-FISTA, which obtains global linear convergence for a broader class of composite models than many of the latest FISTA variants. We demonstrate the versatility and effectiveness of C-FISTA by showing that it outperforms current first-order solvers on both group Lasso and group logistic regression models. Furthermore, we use Fenchel duality to prove that C-FISTA achieves global linear convergence for a large class of convex models.
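For reference, a minimal NumPy sketch of the baseline FISTA scheme applied to a Lasso model (the standard algorithm, not the C-FISTA variant proposed above); the synthetic data and parameter choices are illustrative assumptions.

```python
import numpy as np

def fista_lasso(A, b, lam, steps=500):
    """Standard FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(steps):
        grad = A.T @ (A @ z - b)
        w = z - grad / L                   # gradient step at the extrapolated point
        x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)   # soft-thresholding
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)               # momentum step
        x, t = x_new, t_new
    return x

# Small synthetic example
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
print(fista_lasso(A, b, lam=0.1)[:8])
```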
We consider minimizing a sum of non-smooth objective functions with set constraints in a distributed manner. For this problem, we propose, for the first time, a distributed algorithm with an exponential convergence rate. Using the exact penalty method, we reformulate the problem equivalently as a standard distributed problem without consensus constraints. We then design a distributed projected subgradient algorithm with the help of differential inclusions. Furthermore, we show that the algorithm converges to the optimal solution exponentially fast for strongly convex objective functions.
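A toy discrete-time sketch of a distributed projected subgradient scheme for a problem of this general form (not the differential-inclusion or exact-penalty construction described above), using NumPy; the ring topology, local data, and step sizes are illustrative assumptions.

```python
import numpy as np

# Eight agents cooperatively minimize sum_i |x - a_i| over x in [0, 10],
# each holding one data point a_i and communicating over a ring graph.
n, steps = 8, 2000
a = np.array([1.0, 2.0, 3.0, 4.0, 4.0, 6.0, 7.0, 9.0])   # local data
x = np.zeros(n)                                           # one estimate per agent

# Doubly stochastic mixing matrix for a ring: average with the two neighbours.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

for k in range(1, steps + 1):
    x = W @ x                          # consensus (mixing) step
    g = np.sign(x - a)                 # local subgradient of |x - a_i|
    x = x - g / np.sqrt(k)             # diminishing step size
    x = np.clip(x, 0.0, 10.0)          # projection onto the constraint set

print(x)   # all agents close to the global minimizer (the median of a, here 4)
```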