Gradient-free optimization methods, such as surrogate-based optimization (SBO) methods and genetic algorithms (GAs) or evolutionary algorithms (EAs), have gained popularity in the field of constrained optimization of expensive black-box functions. However, the constraint-handling methods used by both classes of solvers do not usually guarantee strictly feasible candidates during optimization. This can become an issue in applied engineering problems where design variables must remain feasible for the simulations not to fail. We propose a constraint-handling method for computationally inexpensive constraint functions which guarantees strictly feasible candidates when using a surrogate-based optimizer. We compare our method to other SBO, GA/EA, and gradient-based algorithms on two analytical test functions (one relatively simple, one relatively hard) and on an applied, fully resolved Computational Fluid Dynamics (CFD) problem concerned with optimizing the undulatory swimming of a fish-like body, and show that the proposed algorithm achieves favorable results while guaranteeing feasible candidates.
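The abstract does not spell out the constraint-handling algorithm itself, but the core idea it describes, exploiting the fact that the constraints are cheap to evaluate in order to pass only strictly feasible candidates to the expensive black-box objective, can be sketched as a rejection filter. The function and parameter names below are illustrative, not from the paper:

```python
import random

def strictly_feasible_candidates(propose, constraints, n_candidates, max_tries=10000):
    # Rejection filter: a candidate is kept only if it strictly satisfies
    # every (computationally cheap) constraint g(x) < 0, so the expensive
    # black-box simulation is never evaluated at an infeasible point.
    feasible, tries = [], 0
    while len(feasible) < n_candidates and tries < max_tries:
        x = propose()
        tries += 1
        if all(g(x) < 0.0 for g in constraints):
            feasible.append(x)
    return feasible

# Example: keep only points strictly inside the unit disk.
propose = lambda: (random.uniform(-1, 1), random.uniform(-1, 1))
inside_disk = lambda p: p[0]**2 + p[1]**2 - 1.0   # g(x) < 0  <=>  feasible
cands = strictly_feasible_candidates(propose, [inside_disk], 20)
```

In a surrogate-based loop, `propose` would typically draw candidates from the surrogate's acquisition step rather than uniformly at random; the filter itself is unchanged.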
Simulation models are widely used in practice to facilitate decision-making in complex, dynamic, and stochastic environments. However, they are computationally expensive to execute and optimize, owing to a lack of analytical tractability. Simulation optimizat
We consider estimation and control of the cylinder wake at low Reynolds numbers. A particular focus is on the development of efficient numerical algorithms to design optimal linear feedback controllers when there are many inputs (disturbances applied
We present a data-driven model predictive control scheme for chance-constrained Markovian switching systems with unknown switching probabilities. Using samples of the underlying Markov chain, ambiguity sets of transition probabilities are estimated w
Studies of the theory and algorithms of constrained optimization usually assume that the feasible region of the optimization problem is nonempty. However, there are many important practical optimization problems whose feasible regions are not known to
Some popular functions used to test global optimization algorithms have multiple local optima, all with the same value, making them all global optima. It is easy to make them more challenging by fortifying them, i.e., by adding a localized bump at the loca
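The fortification idea described above, raising all but one of the tied global optima with a localized bump so that only a single global optimum survives, can be sketched for a one-dimensional minimization problem as follows. The helper `fortify` and its `height`/`width` parameters are hypothetical names for illustration, not taken from the paper:

```python
import math

def fortify(f, optima, keep_index=0, height=1.0, width=0.1):
    # Add a localized Gaussian bump at every known global minimizer except
    # the one at `keep_index`, so the remaining minimizer becomes the
    # unique global optimum of the fortified function.
    bumped = [x0 for i, x0 in enumerate(optima) if i != keep_index]
    def fortified(x):
        bumps = sum(height * math.exp(-(x - x0) ** 2 / (2.0 * width ** 2))
                    for x0 in bumped)
        return f(x) + bumps
    return fortified

# Example: f(x) = (x^2 - 1)^2 has two tied global minima at x = -1 and x = +1.
f = lambda x: (x ** 2 - 1.0) ** 2
ff = fortify(f, optima=[-1.0, 1.0], keep_index=0)  # keep only x = -1 global
```

With a narrow `width`, the bump is negligible away from the fortified optima, so the function's overall landscape, and hence its difficulty profile, is otherwise preserved.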