Convergence Certificate for Stochastic Derivative-Free Trust-Region Methods based on Gaussian Processes


Abstract in English

In many machine learning applications, one wants to learn the unknown objective and constraint functions of an optimization problem from available data and then apply some technique to attain a local optimizer of the learned model. This work considers Gaussian processes (GPs) as global surrogate models and uses them in conjunction with derivative-free trust-region methods. It is well known that derivative-free trust-region methods converge globally, provided the surrogate model is probabilistically fully linear. We prove that GPs are indeed probabilistically fully linear, which yields global convergence and faster progress than linear or quadratic local surrogate models. We draw upon the optimization of a chemical reactor to demonstrate the efficiency of GP-based trust-region methods.
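To make the idea concrete, the following minimal sketch (not the paper's implementation) fits a GP surrogate to samples drawn inside the current trust region and applies the standard ratio test to accept or reject the trial step; the objective `f`, starting point `x0`, and all parameter values are hypothetical placeholders chosen for illustration.

```python
# Illustrative GP-based derivative-free trust-region loop (sketch only).
# Assumes scikit-learn and SciPy; all constants are hypothetical.
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel


def gp_trust_region(f, x0, delta0=1.0, max_iter=50, eta=0.1, n_samples=10, seed=0):
    rng = np.random.default_rng(seed)
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        # Sample the (possibly noisy) objective inside the current trust region.
        X = x + delta * rng.uniform(-1.0, 1.0, size=(n_samples, x.size))
        y = np.array([f(xi) for xi in X])

        # Fit a GP surrogate on the local samples; with suitable sampling this
        # surrogate plays the role of a probabilistically fully linear model.
        gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
        gp.fit(X, y)

        # Approximately minimize the surrogate mean within the trust region.
        res = minimize(lambda z: gp.predict(z.reshape(1, -1))[0], x,
                       bounds=[(xi - delta, xi + delta) for xi in x])
        x_trial = res.x

        # Standard trust-region ratio test on the true objective.
        pred_red = gp.predict(x.reshape(1, -1))[0] - res.fun
        actual_red = f(x) - f(x_trial)
        rho = actual_red / pred_red if pred_red > 1e-12 else 0.0
        if rho >= eta:          # successful step: accept and enlarge the region
            x, delta = x_trial, 1.5 * delta
        else:                   # unsuccessful step: shrink the region
            delta *= 0.5
    return x


# Example usage on a toy noisy quadratic:
# x_opt = gp_trust_region(lambda x: np.sum(x**2) + 0.01 * np.random.randn(), np.ones(2))
```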
