
A Constraint Handling Approach with Guaranteed Feasibility for Surrogate Based Optimization

Published by: Ahmed Abouhussein
Publication date: 2021
Research field: Physics
Paper language: English





Gradient-free optimization methods, such as surrogate-based optimization (SBO) methods and genetic (GA) or evolutionary (EA) algorithms, have gained popularity for the constrained optimization of expensive black-box functions. However, the constraint-handling methods used by both classes of solvers do not usually guarantee strictly feasible candidates during optimization. This can become an issue in applied engineering problems where design variables must remain feasible for simulations not to fail. We propose a constraint-handling method for computationally inexpensive constraint functions that guarantees strictly feasible candidates when using a surrogate-based optimizer. We compare our method to other SBO, GA/EA, and gradient-based algorithms on two analytical test functions (one relatively simple, one relatively hard) and on an applied, fully resolved Computational Fluid Dynamics (CFD) problem concerned with optimizing the undulatory swimming of a fish-like body, and we show that the proposed algorithm yields favorable results while guaranteeing feasible candidates.
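The abstract does not spell out the mechanism, but the key idea it states, evaluating the expensive objective only at candidates that the inexpensive constraint functions certify as strictly feasible, can be illustrated with a short sketch. Everything below (the rejection sampler, the RBF surrogate, the feasibility margin, and all parameter names) is an illustrative assumption, not the paper's algorithm:

```python
# A minimal sketch, NOT the paper's exact method: a surrogate-based loop
# in which every candidate is screened with the inexpensive constraint
# functions before the expensive objective is called, so all evaluated
# designs are strictly feasible.
import numpy as np

def strictly_feasible(x, constraints, margin=1e-9):
    # Strict feasibility: every cheap constraint satisfies g_i(x) <= -margin < 0.
    return all(g(x) <= -margin for g in constraints)

def feasible_sbo(objective, constraints, bounds, n_init=10, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T

    def sample_feasible():
        # Rejection sampling with the cheap constraints: only strictly
        # feasible points are ever passed to the expensive objective.
        while True:
            x = rng.uniform(lo, hi)
            if strictly_feasible(x, constraints):
                return x

    def rbf_predict(X, y, Xq, eps=1.0):
        # Gaussian-RBF interpolant through the points evaluated so far.
        K = np.exp(-eps * np.linalg.norm(X[:, None] - X[None], axis=-1) ** 2)
        w = np.linalg.solve(K + 1e-10 * np.eye(len(X)), y)
        Kq = np.exp(-eps * np.linalg.norm(Xq[:, None] - X[None], axis=-1) ** 2)
        return Kq @ w

    X = np.array([sample_feasible() for _ in range(n_init)])
    y = np.array([objective(x) for x in X])
    for _ in range(n_iter):
        # Minimize the surrogate over a pool of feasible candidates only.
        cands = np.array([sample_feasible() for _ in range(500)])
        x_next = cands[np.argmin(rbf_predict(X, y, cands))]
        X, y = np.vstack([X, x_next]), np.append(y, objective(x_next))
    return X[np.argmin(y)], y.min()
```

The feasibility guarantee comes from screening every candidate with the cheap constraints before any expensive call; this presumes the feasible region has nonzero volume, so that rejection sampling terminates.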




Read also

Simulation models are widely used in practice to facilitate decision-making in a complex, dynamic and stochastic environment. But they are computationally expensive to execute and optimize, due to lack of analytical tractability. Simulation optimization is concerned with developing efficient sampling schemes -- subject to a computational budget -- to solve such optimization problems. To mitigate the computational burden, surrogates are often constructed using simulation outputs to approximate the response surface of the simulation model. In this tutorial, we provide an up-to-date overview of surrogate-based methods for simulation optimization with continuous decision variables. Typical surrogates, including linear basis function models and Gaussian processes, are introduced. Surrogates can be used either as a local approximation or a global approximation. Depending on the choice, one may develop algorithms that converge to either a local optimum or a global optimum. Representative examples are presented for each category. Recent advances in large-scale computation for Gaussian processes are also discussed.
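As a concrete illustration of the Gaussian-process surrogates this tutorial introduces, here is a minimal GP regression on noisy "simulation" outputs; the squared-exponential kernel and all hyperparameter values are assumptions for the sketch, not choices taken from the tutorial:

```python
# A small illustrative sketch of a Gaussian-process surrogate: fit a GP
# to noisy simulation outputs and predict the response surface.
import numpy as np

def gp_fit_predict(X, y, Xq, ell=0.5, sigma_f=1.0, sigma_n=0.1):
    def k(A, B):
        # Squared-exponential (RBF) kernel.
        d2 = np.sum((A[:, None] - B[None]) ** 2, axis=-1)
        return sigma_f**2 * np.exp(-0.5 * d2 / ell**2)

    K = k(X, X) + sigma_n**2 * np.eye(len(X))   # noisy training covariance
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(Xq, X)
    mean = Ks @ alpha                            # posterior mean
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(k(Xq, Xq)) - np.sum(v**2, axis=0)  # posterior variance
    return mean, var

# Usage: surrogate of a noisy 1-D "simulation".
rng = np.random.default_rng(1)
X = rng.uniform(0, 5, (20, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)
Xq = np.linspace(0, 5, 100)[:, None]
mean, var = gp_fit_predict(X, y, Xq)
```

The posterior mean serves as the surrogate response surface, while the posterior variance is what adaptive sampling schemes use to trade off exploration against exploitation.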
We consider estimation and control of the cylinder wake at low Reynolds numbers. A particular focus is on the development of efficient numerical algorithms to design optimal linear feedback controllers when there are many inputs (disturbances applied everywhere) and many outputs (perturbations measured everywhere). We propose a resolvent-based iterative algorithm to perform i) optimal estimation of the flow using a limited number of sensors; and ii) optimal control of the flow when the entire flow is known but only a limited number of actuators are available for control. The method uses resolvent analysis to take advantage of the low-rank characteristics of the cylinder wake and solutions are obtained without any model-order reduction. Optimal feedback controllers are also obtained by combining the solutions of the estimation and control problems. We show that the performance of the estimators and controllers converges to the true global optima, indicating that the important physical mechanisms for estimation and control are of low rank.
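For readers unfamiliar with the resolvent analysis this method builds on, the sketch below computes the resolvent operator of a toy linear system and its SVD; rapid decay of the singular values is the low-rank structure the abstract refers to. The toy system, frequency, and mode count are assumptions for illustration only:

```python
# Illustrative resolvent analysis: for linearized dynamics dx/dt = A x + f,
# the resolvent R(omega) = (i*omega*I - A)^(-1) maps harmonic forcing to
# response, and its SVD exposes the dominant amplification mechanisms.
import numpy as np

def resolvent_modes(A, omega, n_modes=3):
    n = A.shape[0]
    R = np.linalg.inv(1j * omega * np.eye(n) - A)  # resolvent operator
    U, s, Vh = np.linalg.svd(R)
    # s[k] is the gain of the k-th forcing/response mode pair.
    return U[:, :n_modes], s[:n_modes], Vh[:n_modes].conj().T

# Usage on a toy stable linear system (stand-in for the linearized wake).
rng = np.random.default_rng(0)
A = -np.eye(50) + 0.8 * rng.standard_normal((50, 50)) / np.sqrt(50)
U, s, V = resolvent_modes(A, omega=1.0)
print("leading gains:", s)  # a large s[0]/s[1] ratio indicates low rank
```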
We present a data-driven model predictive control scheme for chance-constrained Markovian switching systems with unknown switching probabilities. Using samples of the underlying Markov chain, ambiguity sets of transition probabilities are estimated which include the true conditional probability distributions with high probability. These sets are updated online and used to formulate a time-varying, risk-averse optimal control problem. We prove recursive feasibility of the resulting MPC scheme and show that the original chance constraints remain satisfied at every time step. Furthermore, we show that under sufficient decrease of the confidence levels, the resulting MPC scheme renders the closed-loop system mean-square stable with respect to the true-but-unknown distributions, while remaining less conservative than a fully robust approach.
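A minimal sketch of the ambiguity-set construction described above: estimate each row of the transition matrix from observed switches and attach an L1 radius that shrinks with the sample count, so the true row lies in the set with high probability. The concentration bound used here is a standard L1 deviation bound for empirical distributions; the paper's exact radius and set shape may differ:

```python
# Illustrative L1 ambiguity sets for the rows of an unknown Markov
# transition matrix, built from a sampled trajectory of the chain.
import numpy as np

def l1_ambiguity_sets(chain, n_modes, beta=0.05):
    counts = np.zeros((n_modes, n_modes))
    for i, j in zip(chain[:-1], chain[1:]):   # count observed transitions
        counts[i, j] += 1
    sets = []
    for i in range(n_modes):
        n_i = counts[i].sum()
        p_hat = counts[i] / n_i if n_i > 0 else np.full(n_modes, 1 / n_modes)
        # Standard bound: P(||p_hat - p||_1 >= r) <= (2^m - 2) exp(-n r^2 / 2),
        # inverted for r at confidence level 1 - beta.
        r = np.sqrt(2 / max(n_i, 1) * np.log((2**n_modes - 2) / beta))
        sets.append((p_hat, r))   # ambiguity set {p : ||p - p_hat||_1 <= r}
    return sets

# Usage: samples of a 3-mode switching signal (placeholder observations).
rng = np.random.default_rng(2)
chain = rng.integers(0, 3, 500)
for i, (p_hat, r) in enumerate(l1_ambiguity_sets(chain, 3)):
    print(f"mode {i}: p_hat={np.round(p_hat, 2)}, radius={r:.3f}")
```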
Yu-Hong Dai, Liwei Zhang (2020)
Studies of the theory and algorithms of constrained optimization usually assume that the feasible region of the optimization problem is nonempty. However, there are many important practical optimization problems whose feasible regions are not known to be nonempty, and for which one prefers to find optimizers of the objective function with the least constraint violation. A natural way to deal with these problems is to extend the constrained optimization problem to one that optimizes the objective function over the set of points with the least constraint violation. Firstly, the minimization problem with least constraint violation is proved to be a Lipschitz equality constrained optimization problem when the original problem is a convex optimization problem with possibly inconsistent conic constraints, and it can be reformulated as an MPEC problem. Secondly, for nonlinear programming problems with possibly inconsistent constraints, various types of stationary points are presented for the MPCC problem that is equivalent to the minimization problem with least constraint violation, and an elegant necessary optimality condition, named the L-stationary condition, is established from the classical optimality theory of Lipschitz continuous optimization. Finally, a smoothing Fischer-Burmeister function method is constructed for solving the nonlinear programming problem of minimizing the objective function with the least constraint violation. It is demonstrated that, as the positive smoothing parameter approaches zero, any point in the outer limit of the KKT-point mapping is an L-stationary point of the equivalent MPCC problem.
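For context, the Fischer-Burmeister function encodes the complementarity conditions a >= 0, b >= 0, ab = 0 of an MPCC as a single equation, and smoothing it makes that equation differentiable. One common smoothed form is sketched below; the exact smoothing used in the paper may differ, and the test values are illustrative:

```python
# One common smoothed Fischer-Burmeister function:
#   phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu^2).
# As mu -> 0 it recovers the nonsmooth FB function, whose zeros are
# exactly the complementarity points a >= 0, b >= 0, a*b = 0, so the
# MPCC's complementarity constraints can be replaced by smooth
# equations and driven to the limit mu -> 0.
import numpy as np

def fb_smooth(a, b, mu):
    return a + b - np.sqrt(a**2 + b**2 + 2 * mu**2)

# At a complementarity point (a, b) = (0.5, 0.0), phi_mu tends to 0:
for mu in (1.0, 0.1, 0.01, 0.001):
    print(mu, fb_smooth(0.5, 0.0, mu))
```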
Some popular functions used to test global optimization algorithms have multiple local optima, all with the same value, making them all global optima. It is easy to make them more challenging by fortifying them, i.e., adding a localized bump at the location of one of the optima. In previous work the authors illustrated this for the Branin-Hoo function and the popular differential evolution algorithm, showing that the fortified Branin-Hoo required an order of magnitude more function evaluations. This paper examines the effect of fortifying the Branin-Hoo function on surrogate-based optimization, which usually proceeds by adaptive sampling. Two algorithms are considered: the EGO algorithm, which is based on a Gaussian process (GP), and an algorithm based on radial basis functions (RBFs). EGO is found to be more frugal in the number of function evaluations required to identify the correct basin, but it is expensive to run on a desktop, limiting the number of times the runs could be repeated to establish sound statistics on the number of required function evaluations. The RBF algorithm was cheaper to run, providing sounder statistics on performance. A four-dimensional version of the Branin-Hoo function was introduced in order to assess the effect of dimensionality. It was found that the difference between the ordinary function and the fortified one was much more pronounced for the four-dimensional function than for the two-dimensional one.
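To make the fortification idea concrete, the sketch below implements the standard Branin-Hoo function (whose three global optima share the value of approximately 0.39789) and one plausible fortified variant that deepens a single optimum with a localized Gaussian bump; the bump depth and width are illustrative choices, not the authors' values:

```python
# Standard Branin-Hoo plus a "fortified" variant: subtracting a narrow
# Gaussian bump at one of the three equal-valued global optima makes
# that basin the unique global optimum, which an optimizer must then
# distinguish from the two decoys.
import numpy as np

def branin(x1, x2):
    a, b, c = 1.0, 5.1 / (4 * np.pi**2), 5 / np.pi
    r, s, t = 6.0, 10.0, 1 / (8 * np.pi)
    return a * (x2 - b * x1**2 + c * x1 - r) ** 2 + s * (1 - t) * np.cos(x1) + s

def fortified_branin(x1, x2, depth=1.0, width=0.5):
    # Deepen only the optimum at (pi, 2.275); the other two stay decoys.
    bump = depth * np.exp(-((x1 - np.pi) ** 2 + (x2 - 2.275) ** 2) / width**2)
    return branin(x1, x2) - bump

print(branin(np.pi, 2.275))            # ~0.39789, shared by all three optima
print(fortified_branin(np.pi, 2.275))  # lower: now the unique global optimum
```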