Derivatives are an important tool for single-objective optimization. In fact, it is commonly accepted that derivative-based methods outperform derivative-free optimization approaches. In this work, we will show that the same does not hold for multiobjective derivative-based optimization, when the goal is to compute an approximation to the complete Pareto front of a given problem. The competitiveness of Direct MultiSearch (DMS), a robust and efficient derivative-free optimization algorithm, will be established on derivative-based multiobjective optimization problems. We will then assess the potential enrichment of adding first-order information to the DMS framework. Derivatives will be used to prune the positive spanning sets considered at the poll step of the algorithm, highlighting the role that ascent directions conforming to the geometry of the nearby feasible region can play. Both variants of DMS are shown to be competitive against a state-of-the-art derivative-based algorithm. Moreover, for reasonably small budgets of function evaluations, the new variant is competitive not only with the derivative-based solver but also with the original implementation of DMS.
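To make the pruning idea concrete, the following is a minimal sketch, not the paper's actual rule: it discards a poll direction when it is first-order ascent for every objective, keeping any direction that is descent for at least one. The function and variable names are illustrative assumptions, and the sketch covers only the unconstrained prune; the abstract notes that in the constrained case certain ascent directions aligned with the nearby feasible geometry may be worth retaining.

```python
import numpy as np

def pruned_poll_set(jacobian, directions):
    """Prune a positive spanning set with first-order information.

    Keeps a poll direction d only if it is descent for at least one
    objective, i.e. discards d with grad f_j(x) . d >= 0 for every j.
    `jacobian` is the m x n matrix whose rows are the objective
    gradients at the current poll center (names are assumptions).
    """
    return [d for d in directions if np.any(jacobian @ d < 0.0)]

# Example: the coordinate positive spanning set [I, -I] in R^2.
n = 2
D = [s * np.eye(n)[i] for i in range(n) for s in (1.0, -1.0)]
J = np.array([[1.0, 2.0],     # gradient of f_1 at the poll center
              [0.5, -1.0]])   # gradient of f_2 at the poll center
kept = pruned_poll_set(J, D)
print(len(D), "->", len(kept))  # 4 -> 3: +e_1 is ascent for both objectives
```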
In this article we develop a gradient-based algorithm for the solution of multiobjective optimization problems with uncertainties. To this end, an additional condition is derived for the descent direction in order to account for inaccuracies in the g
Inverse multiobjective optimization provides a general framework for the unsupervised learning task of inferring parameters of a multiobjective decision making problem (DMP), based on a set of observed decisions from a human expert. However, the pe
In this paper, we propose some new proximal quasi-Newton methods, with or without line search, for a special class of nonsmooth multiobjective optimization problems, where each objective function is the sum of a twice continuously different
We study the robustness of accelerated first-order algorithms to stochastic uncertainties in gradient evaluation. Specifically, for unconstrained, smooth, strongly convex optimization problems, we examine the mean-squared error in the optimization va
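As a rough illustration of the setting in the abstract above, here is a small sketch, under assumptions of my own, of measuring the mean-squared error of Nesterov's accelerated method on a strongly convex quadratic when the gradient oracle is corrupted by additive white noise. The quadratic, the noise model, and all names are illustrative; this is not the analysis or model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Strongly convex quadratic f(x) = 0.5 x^T A x with minimizer x* = 0.
A = np.diag([1.0, 10.0])
L, m = 10.0, 1.0                 # smoothness / strong-convexity constants

def noisy_grad(x, sigma):
    """Gradient oracle with additive white noise (an assumed model)."""
    return A @ x + sigma * rng.standard_normal(x.shape)

def nesterov_mse(sigma, iters=200, runs=500):
    """Average ||x_k - x*||^2 over independent runs of Nesterov's
    accelerated method with step 1/L and the standard momentum
    coefficient for strongly convex problems."""
    alpha = 1.0 / L
    beta = (np.sqrt(L) - np.sqrt(m)) / (np.sqrt(L) + np.sqrt(m))
    errs = []
    for _ in range(runs):
        x = x_prev = np.array([1.0, 1.0])
        for _ in range(iters):
            y = x + beta * (x - x_prev)
            x_prev, x = x, y - alpha * noisy_grad(y, sigma)
        errs.append(np.dot(x, x))
    return np.mean(errs)

print(nesterov_mse(sigma=0.0))  # noiseless: error decays to ~0
print(nesterov_mse(sigma=0.1))  # noise induces a steady-state MSE floor
```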
In recent years, the success of deep learning has inspired many researchers to study the optimization of general smooth non-convex functions. However, recent works have established pessimistic worst-case complexities for this class of functions, which i