
Gradient-Based Multiobjective Optimization with Uncertainties

Published by: Sebastian Peitz
Publication date: 2016
Language: English





In this article we develop a gradient-based algorithm for the solution of multiobjective optimization problems with uncertainties. To this end, an additional condition is derived for the descent direction in order to account for inaccuracies in the gradients; it is then incorporated into a subdivision algorithm for the computation of global solutions to multiobjective optimization problems. Convergence to a superset of the Pareto set is proved, and an upper bound for the maximal distance to the set of substationary points is given. Besides its applicability to problems with uncertainties, the algorithm is developed with the intention of combining it with model order reduction techniques in order to efficiently solve PDE-constrained multiobjective optimization problems.
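For concreteness, here is a minimal Python sketch of the kind of safeguarded descent-direction computation the abstract describes: the minimum-norm element q of the convex hull of the objective gradients yields the classical common descent direction -q, and a test against a gradient error bound eps stands in for the paper's additional condition. The function name, the SciPy solver choice, and the form of the eps-test are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def inexact_descent_direction(grads, eps):
    """Sketch: min-norm convex combination of (inexact) gradients,
    accepted as a descent direction only if it clearly exceeds the
    gradient error bound eps (illustrative safeguard)."""
    G = np.asarray(grads, dtype=float)   # shape (k, n): one gradient per objective
    k = G.shape[0]

    # Minimize ||sum_i alpha_i grad_i||^2 over the probability simplex.
    def sqnorm(alpha):
        q = alpha @ G
        return q @ q

    cons = ({'type': 'eq', 'fun': lambda a: a.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * k
    res = minimize(sqnorm, np.full(k, 1.0 / k), bounds=bounds, constraints=cons)
    q = res.x @ G

    # If ||q|| does not dominate the gradient inaccuracy, descent cannot be
    # certified: treat the point as substationary up to the error bound.
    if np.linalg.norm(q) <= eps:
        return None
    return -q
```

With exact gradients (eps = 0), -q is the classical common descent direction for all objectives simultaneously; the eps-test only illustrates how an inaccuracy margin can be folded into that subproblem.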


Read also

Chaosheng Dong, Bo Zeng (2020)
Inverse multiobjective optimization provides a general framework for the unsupervised learning task of inferring parameters of a multiobjective decision making problem (DMP), based on a set of observed decisions from the human expert. However, the performance of this framework relies critically on the availability of an accurate DMP, sufficient decisions of high quality, and a parameter space that contains enough information about the DMP. To hedge against the uncertainties in the hypothetical DMP, the data, and the parameter space, we investigate in this paper the distributionally robust approach for inverse multiobjective optimization. Specifically, we leverage the Wasserstein metric to construct a ball centered at the empirical distribution of these decisions. We then formulate a Wasserstein distributionally robust inverse multiobjective optimization problem (WRO-IMOP) that minimizes a worst-case expected loss function, where the worst case is taken over all distributions in the Wasserstein ball. We show that the excess risk of the WRO-IMOP estimator has a sub-linear convergence rate. Furthermore, we propose the semi-infinite reformulations of the WRO-IMOP and develop a cutting-plane algorithm that converges to an approximate solution in finite iterations. Finally, we demonstrate the effectiveness of our method on both a synthetic multiobjective quadratic program and a real-world portfolio optimization problem.
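The cutting-plane idea mentioned above can be illustrated with a generic min-max loop: keep a finite set of worst-case scenarios (cuts), solve a master problem over them, and query a separation oracle for a new violated scenario. The sketch below is a toy stand-in under those assumptions; all names are hypothetical, and the actual WRO-IMOP master problem and oracle are far more structured.

```python
import numpy as np
from scipy.optimize import minimize

def cutting_plane(theta0, loss, oracle, tol=1e-6, max_iter=50):
    """Generic cutting-plane loop for min_theta max_xi loss(theta, xi).
    `oracle(theta)` returns the most violated scenario at theta."""
    theta, cuts = np.asarray(theta0, dtype=float), []
    for _ in range(max_iter):
        xi = oracle(theta)
        worst_known = max((loss(theta, c) for c in cuts), default=-np.inf)
        if loss(theta, xi) <= worst_known + tol:
            return theta                    # no new violated cut: approximate solution
        cuts.append(xi)
        # Master problem: minimize the max over the cuts collected so far.
        theta = minimize(lambda th: max(loss(th, c) for c in cuts),
                         theta, method="Nelder-Mead").x
    return theta

# Toy usage: scenarios shift a quadratic; the oracle scans a finite grid.
scenarios = np.linspace(-1.0, 1.0, 21)
loss = lambda th, xi: (th[0] - xi) ** 2
oracle = lambda th: max(scenarios, key=lambda xi: loss(th, xi))
print(cutting_plane([0.3], loss, oracle))   # converges near 0, the minimax center
```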
We present a novel algorithm that allows us to gain detailed insight into the effects of sparsity in linear and nonlinear optimization, which is of great importance in many scientific areas such as image and signal processing, medical imaging, compressed sensing, and machine learning (e.g., for the training of neural networks). Sparsity is an important feature to ensure robustness against noisy data, but also to find models that are interpretable and easy to analyze due to the small number of relevant terms. It is common practice to enforce sparsity by adding the $\ell_1$-norm as a weighted penalty term. In order to gain a better understanding and to allow for an informed model selection, we directly solve the corresponding multiobjective optimization problem (MOP) that arises when we minimize the main objective and the $\ell_1$-norm simultaneously. As this MOP is in general non-convex for nonlinear objectives, the weighting method will fail to provide all optimal compromises. To avoid this issue, we present a continuation method which is specifically tailored to MOPs with two objective functions, one of which is the $\ell_1$-norm. Our method can be seen as a generalization of well-known homotopy methods for linear regression problems to the nonlinear case. Several numerical examples, including neural network training, demonstrate our theoretical findings and the additional insight that can be gained by this multiobjective approach.
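For the linear special case, the flavor of such a continuation can be reproduced with a plain proximal-gradient (ISTA) sweep over a decreasing sequence of regularization weights, warm-starting each solve from the previous one; each iterate is one compromise between data fit and sparsity. This is a standard textbook sketch, not the paper's tailored nonlinear continuation method.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_path(A, b, lambdas, iters=200):
    """Solve min 0.5*||Ax - b||^2 + lam*||x||_1 by ISTA for a decreasing
    sequence of lam values, warm-starting each solve from the previous one.
    Each returned vector approximates one Pareto-optimal compromise."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    path = []
    for lam in lambdas:
        for _ in range(iters):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - grad / L, lam / L)
        path.append(x.copy())
    return path

# Usage on random data: sparsity decreases as lam decreases along the path.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 50)), rng.standard_normal(20)
front = ista_path(A, b, lambdas=np.geomspace(5.0, 0.05, 10))
print([int((x != 0).sum()) for x in front])
```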
Jian-Wen Peng, Jie Ren (2021)
In this paper, we propose some new proximal quasi-Newton methods with line search or without line search for a special class of nonsmooth multiobjective optimization problems, where each objective function is the sum of a twice continuously differentiable strongly convex function and a proper convex but not necessarily differentiable function. In these new proximal quasi-Newton methods, we approximate the Hessian matrices by using the well-known BFGS, self-scaling BFGS, and Huang BFGS methods. We show that each accumulation point of the sequence generated by these new algorithms is a Pareto stationary point of the multiobjective optimization problem. In addition, we give their applications in robust multiobjective optimization, and we show that the subproblems of the proximal quasi-Newton algorithms can be regarded as quadratic programming problems. Numerical experiments are carried out to verify the effectiveness of the proposed methods.
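The Hessian approximations these methods rely on are built from the standard BFGS update, which can be stated in a few lines; the curvature safeguard below is a common practical convention, not something claimed by the paper.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of a Hessian approximation B: given a step
    s = x_new - x_old and the gradient difference y of the smooth part,
    revise B so that the secant condition B_new @ s == y holds."""
    Bs = B @ s
    sy = s @ y
    if sy <= 1e-10:          # skip the update when curvature is not positive
        return B
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy

# Secant check: B_new @ s reproduces y (up to rounding).
B = np.eye(3)
s = np.array([1.0, 0.0, 0.5])
y = np.array([2.0, 0.1, 1.0])
print(np.allclose(bfgs_update(B, s, y) @ s, y))   # True
```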
R. Andreani, A. L. Custodio et al. (2020)
Derivatives are an important tool for single-objective optimization. In fact, it is commonly accepted that derivative-based methods present a better performance than derivative-free optimization approaches. In this work, we will show that the same does not apply to multiobjective derivative-based optimization, when the goal is to compute an approximation to the complete Pareto front of a given problem. The competitiveness of Direct MultiSearch (DMS), a robust and efficient derivative-free optimization algorithm, will be established on derivative-based multiobjective optimization problems. We will then assess the potential enrichment of adding first-order information to the DMS framework. Derivatives will be used to prune the positive spanning sets considered at the poll step of the algorithm, highlighting the role that ascent directions that conform to the geometry of the nearby feasible region can have. Both variants of DMS prove to be competitive against a state-of-the-art derivative-based algorithm. Moreover, for reasonably small budgets of function evaluations, the new variant is not only competitive with the derivative-based solver but also with the original implementation of DMS.
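One plausible reading of the pruning step is sketched below: a poll direction along which every objective strictly increases (an ascent direction for all gradients simultaneously) cannot yield a nondominated improvement and can be discarded. The rule and names are illustrative assumptions, not the authors' exact criterion.

```python
import numpy as np

def prune_poll_directions(directions, grads):
    """Keep a poll direction d unless it is an ascent direction for every
    objective, i.e. unless g @ d > 0 for all gradients g (illustrative rule)."""
    return [d for d in directions if not all(g @ d > 0 for g in grads)]

# Usage on the 2D coordinate directions with two toy gradients:
dirs = [np.array(v, dtype=float) for v in ([1, 0], [-1, 0], [0, 1], [0, -1])]
grads = [np.array([1.0, 0.5]), np.array([0.8, -0.2])]
print(prune_poll_directions(dirs, grads))   # [1, 0] is pruned: ascent for both
```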
Aggregation functions largely determine the convergence and diversity performance of multi-objective evolutionary algorithms in decomposition methods. Nevertheless, the traditional Tchebycheff function does not consider the matching relationship between the weight vectors and candidate solutions. In this paper, the concept of matching degree is proposed, which employs the vectorial angles between weight vectors and candidate solutions. Based on the matching degree, a new modified Tchebycheff aggregation function is proposed, which integrates the matching degree into the Tchebycheff aggregation function. Moreover, the proposed decomposition method retains the same functionality as the Tchebycheff aggregation function. Based on the proposed decomposition approach, a new multiobjective optimization algorithm, named the decomposition-based multi-objective state transition algorithm, is proposed. Relevant experimental results show that the proposed algorithm is highly competitive in comparison with other state-of-the-art multiobjective optimization algorithms.
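A hedged sketch of how such an angle-based matching degree could modify the Tchebycheff value is given below; the cosine definition, the penalty form, and the gamma parameter are all assumptions for illustration, since the paper's exact formula is not reproduced here.

```python
import numpy as np

def matching_degree(f, w, z):
    """Cosine of the angle between the translated objective vector f - z and
    the weight vector w: one plausible reading of the matching degree above."""
    v = f - z
    return (v @ w) / (np.linalg.norm(v) * np.linalg.norm(w) + 1e-12)

def modified_tchebycheff(f, w, z, gamma=1.0):
    """Illustrative modified Tchebycheff value: the classic
    max_i w_i * |f_i - z_i| term, inflated when the candidate's objective
    vector is poorly aligned with its weight vector."""
    tcheby = np.max(w * np.abs(f - z))
    mismatch = 1.0 - matching_degree(f, w, z)   # 0 when perfectly aligned
    return tcheby * (1.0 + gamma * mismatch)

# Usage: the aligned candidate scores better than the misaligned one.
z = np.zeros(2)
w = np.array([0.5, 0.5])
print(modified_tchebycheff(np.array([1.0, 1.0]), w, z))   # aligned with w
print(modified_tchebycheff(np.array([0.2, 1.8]), w, z))   # misaligned, penalized
```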