
Sequential Convex Restriction and its Applications in Robust Optimization

Published by: Dongchan Lee
Publication date: 2019
Paper language: English





This paper presents a convex sufficient condition for solving a system of nonlinear equations under parametric changes and proposes a sequential convex optimization method for solving robust optimization problems with nonlinear equality constraints. By bounding the nonlinearity with concave envelopes and using Brouwer's fixed point theorem, the sufficient condition is expressed in terms of closed-form convex inequality constraints. We extend the result to provide a convex sufficient condition for feasibility under bounded uncertainty. Using these conditions, a non-convex optimization problem can be solved as a sequence of convex optimization problems, with feasibility and robustness guarantees. We present a detailed analysis of the performance and complexity of the proposed condition. Examples in polynomial optimization and nonlinear networks are provided to illustrate the proposed method.
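The core idea, replacing a nonconvex problem by a sequence of convex surrogates that are safe to minimize, can be illustrated with a deliberately simple univariate majorization scheme. This is a generic sketch, not the paper's envelope-based restriction; the objective and the curvature bound are invented for illustration:

```python
import math

def scp_minimize(x0, iters=50):
    """Minimize f(x) = x**2 + cos(x) via a sequence of convex surrogates.

    Since |cos''| <= 1, at each iterate xk we have the convex upper bound
        cos(x) <= cos(xk) - sin(xk)*(x - xk) + 0.5*(x - xk)**2,
    so f is majorized by a convex quadratic. Setting the surrogate's
    derivative 2x - sin(xk) + (x - xk) to zero gives a closed-form step.
    """
    x = x0
    for _ in range(iters):
        x = (math.sin(x) + x) / 3.0  # exact minimizer of the surrogate
    return x
```

Because each surrogate upper-bounds f and matches it at the current iterate, every surrogate minimizer decreases f, and the iteration converges to the stationary point x = 0 of f(x) = x**2 + cos(x).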




Read also

N.H. Chieu, J.W. Feng, W. Gao (2017)
In this paper, we introduce a new class of nonsmooth convex functions called SOS-convex semi-algebraic functions, extending the recently proposed notion of SOS-convex polynomials. This class of nonsmooth convex functions covers many common nonsmooth functions arising in applications, such as the Euclidean norm, the maximum eigenvalue function, and least-squares functions with $\ell_1$-regularization or elastic net regularization used in statistics and compressed sensing. We show that, under commonly used strict feasibility conditions, the optimal value and an optimal solution of SOS-convex semi-algebraic programs can be found by solving a single semi-definite programming problem (SDP). We achieve the results by using tools from semi-algebraic geometry, the convex-concave minimax theorem, and a recently established Jensen-inequality-type result for SOS-convex polynomials. As an application, we outline how the derived results can be applied to show that robust SOS-convex optimization problems under restricted spectrahedron data uncertainty enjoy exact SDP relaxations. This extends the existing exact SDP relaxation result for restricted ellipsoidal data uncertainty and answers the open questions left in [Optimization Letters 9, 1-18 (2015)] on how to recover a robust solution from the semi-definite programming relaxation in this broader setting.
We present an algorithm for robust model predictive control with consideration of uncertainty and safety constraints. Our framework considers a nonlinear dynamical system subject to disturbances from an unknown but bounded uncertainty set. By viewing the system as a fixed point of an operator acting over trajectories, we propose a convex condition on control actions that guarantees safety against the uncertainty set. The proposed condition guarantees that all realizations of the state trajectories satisfy the safety constraints. Our algorithm solves a sequence of convex quadratically constrained optimization problems of size n*N, where n is the number of states and N is the prediction horizon in the model predictive control problem. Compared to existing methods, our approach solves convex problems while guaranteeing that no realization of the uncertainty set violates the safety constraints. Moreover, we consider the implicit time-discretization of the system dynamics to increase the prediction horizon and enhance computational accuracy. Numerical simulations for vehicle navigation demonstrate the effectiveness of our approach.
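The flavor of guaranteeing safety for every disturbance realization can be sketched with worst-case constraint tightening on a scalar linear system. This is a hedged illustration, not the abstract's operator-fixed-point condition; the system, bounds, and function names are invented for the example:

```python
def robust_tube(a, w_max, horizon, x_max, x_nom):
    """Certify |x_t| <= x_max for every disturbance realization |w_t| <= w_max.

    For x_{t+1} = a*x_t + u_t + w_t, any realization deviates from the
    nominal trajectory by at most r_t, where r_{t+1} = |a|*r_t + w_max
    and r_0 = 0. The nominal plan is robustly safe iff the tightened
    constraint |x_nom[t]| <= x_max - r_t holds at every step.
    """
    r = 0.0
    safe = True
    radii = []
    for t in range(horizon + 1):
        radii.append(r)
        if abs(x_nom[t]) > x_max - r:
            safe = False
        r = abs(a) * r + w_max  # worst-case tube growth
    return safe, radii
```

For |a| < 1 the tube radius saturates at w_max / (1 - |a|), so the tightening stays bounded over long horizons.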
The usual approach to developing and analyzing first-order methods for smooth convex optimization assumes that the gradient of the objective function is uniformly smooth with some Lipschitz constant $L$. However, in many settings the differentiable convex function $f(\cdot)$ is not uniformly smooth -- for example in $D$-optimal design, where $f(x) := -\ln\det(HXH^T)$, or even in the univariate setting with $f(x) := -\ln(x) + x^2$. Herein we develop a notion of relative smoothness and relative strong convexity that is determined relative to a user-specified reference function $h(\cdot)$ (which should be computationally tractable for algorithms), and we show that many differentiable convex functions are relatively smooth with respect to a correspondingly fairly simple reference function $h(\cdot)$. We extend two standard algorithms -- the primal gradient scheme and the dual averaging scheme -- to our new setting, with associated computational guarantees. We apply our new approach to develop a new first-order method for the $D$-optimal design problem, with associated computational complexity analysis. Some of our results have a certain overlap with the recent work \cite{bbt}.
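A primal gradient step under relative smoothness replaces the usual quadratic model with the Bregman divergence of the reference function h. The sketch below uses a standard textbook pair, f(x) = x**4/4 with h(x) = x**4/4 + x**2/2 (so h'' - f'' = 1 >= 0 and f is 1-smooth relative to h); it is an illustration of the scheme's shape, not the paper's $D$-optimal design method:

```python
def bregman_gradient_step(xk):
    """One primal-gradient (Bregman) step for f(x) = x**4 / 4,
    which is 1-smooth relative to h(x) = x**4 / 4 + x**2 / 2.

    The step minimizes f(xk) + f'(xk)*(x - xk) + D_h(x, xk), i.e. it
    solves h'(x) = h'(xk) - f'(xk). Here that reduces to the strictly
    increasing cubic x**3 + x = xk, solved by bisection.
    """
    target = xk  # (xk**3 + xk) - xk**3
    lo, hi = -abs(xk) - 1.0, abs(xk) + 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid ** 3 + mid < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def minimize_relative(x0, iters=40):
    """Iterate the Bregman step; converges to the minimizer x = 0."""
    x = x0
    for _ in range(iters):
        x = bregman_gradient_step(x)
    return x
```

Note that no global Lipschitz constant for f' exists here (f'' = 3x**2 is unbounded), yet the relative-smoothness step is well defined everywhere.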
We study the problem of policy synthesis for uncertain partially observable Markov decision processes (uPOMDPs). The transition probability function of uPOMDPs is only known to belong to a so-called uncertainty set, for instance in the form of probability intervals. Such a model arises when, for example, an agent operates under information limitation due to imperfect knowledge about the accuracy of its sensors. The goal is to compute a policy for the agent that is robust against all possible probability distributions within the uncertainty set. In particular, we are interested in a policy that robustly ensures the satisfaction of temporal logic and expected reward specifications. We state the underlying optimization problem as a semi-infinite quadratically-constrained quadratic program (QCQP), which has finitely many variables and infinitely many constraints. Since QCQPs are non-convex in general and practically infeasible to solve, we resort to the so-called convex-concave procedure to convexify the QCQP. Even though convex, the resulting optimization problem still has infinitely many constraints and is NP-hard. For uncertainty sets that form convex polytopes, we provide a transformation of the problem to a convex QCQP with finitely many constraints. We demonstrate the feasibility of our approach by means of several case studies that highlight typical bottlenecks for our problem. In particular, we show that we are able to solve benchmarks with hundreds of thousands of states and hundreds of different observations, and we investigate the effect of different levels of uncertainty in the models.
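The convex-concave procedure invoked above linearizes the concave part of a difference-of-convex objective at the current iterate and solves the resulting convex problem. A one-dimensional toy version (an illustration of the procedure, not the uPOMDP QCQP itself; the objective is invented):

```python
def ccp_minimize(x0, iters=60):
    """Convex-concave procedure on f(x) = x**4 - x**2 (convex minus convex).

    At iterate xk, the concave part -x**2 is replaced by its tangent
    -xk**2 - 2*xk*(x - xk). The convex surrogate x**4 - 2*xk*x + const
    is minimized in closed form: 4*x**3 = 2*xk, i.e. x = (xk/2)**(1/3).
    """
    x = x0
    for _ in range(iters):
        if x >= 0:
            x = (x / 2.0) ** (1.0 / 3.0)
        else:
            x = -((-x / 2.0) ** (1.0 / 3.0))
    return x
```

The fixed point x = 1/sqrt(2) is a global minimizer of x**4 - x**2 here, but in general the procedure only guarantees monotone descent to a stationary point.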
The min-max optimization problem, also known as the saddle point problem, is a classical optimization problem which is also studied in the context of zero-sum games. Given a class of objective functions, the goal is to find a value for the argument which leads to a small objective value even for the worst-case function in the given class. Min-max optimization problems have recently become very popular in a wide range of signal and data processing applications, such as fair beamforming, training generative adversarial networks (GANs), and robust machine learning, to name just a few. The overarching goal of this article is to provide a survey of recent advances for an important subclass of min-max problems, where the minimization and maximization problems can be non-convex and/or non-concave. In particular, we first present a number of applications to showcase the importance of such min-max problems; we then discuss key theoretical challenges, and provide a selective review of some exciting recent theoretical and algorithmic advances in tackling non-convex min-max problems. Finally, we point out open questions and future research directions.
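The simplest algorithmic baseline for such problems is simultaneous gradient descent-ascent (GDA). A minimal sketch on a strongly-convex-strongly-concave toy saddle (the objective is invented for illustration; it is not drawn from the survey):

```python
def gda(x0, y0, eta=0.1, iters=300):
    """Simultaneous gradient descent-ascent on
        f(x, y) = 0.5*x**2 + x*y - 0.5*y**2,
    whose unique saddle point is (0, 0): x descends on f, y ascends.
    """
    x, y = x0, y0
    for _ in range(iters):
        gx = x + y  # df/dx
        gy = x - y  # df/dy
        x, y = x - eta * gx, y + eta * gy
    return x, y
```

On this strongly-convex-strongly-concave problem GDA converges linearly for small step sizes, but on the bilinear objective f(x, y) = x*y the same simultaneous updates spiral outward and diverge, which is one reason the non-convex/non-concave setting surveyed here requires more careful algorithms.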
