In behavioural economics, a decision maker's preferences are expressed by choice functions. Preference robust optimization (PRO) is concerned with problems in which the decision maker's preferences are ambiguous and the optimal decision is based on a choice function that is robust with respect to a preference ambiguity set. In this paper, we propose a PRO model that supports choice functions that are: (i) monotonic (prefer more to less), (ii) quasi-concave (prefer diversification), and (iii) multi-attribute (have multiple objectives/criteria). As our main result, we show that the robust choice function can be constructed efficiently by solving a sequence of linear programming problems, and that it can then be optimized efficiently by solving a sequence of convex optimization problems. Our numerical experiments on portfolio optimization and capital allocation problems show that the method is practical and scalable.
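The LP-based construction mentioned in the abstract can be illustrated with a deliberately simplified, single-attribute expected-utility analogue; the paper's actual model treats multi-attribute, quasi-concave choice functions, so the sketch below is only indicative of the general idea, not the paper's formulation. The names worst_case_utility, grid, x_lottery, and comparisons are illustrative, not the paper's notation. The assumed setting: the preference ambiguity set consists of nondecreasing, concave utilities normalized to [0, 1] that are consistent with finitely many elicited pairwise comparisons of lotteries, and the robust value of a prospect is its worst-case expected utility over that set, computed with a single linear program via scipy.optimize.linprog.

```python
import numpy as np
from scipy.optimize import linprog

def worst_case_utility(grid, x_lottery, comparisons):
    """Worst-case expected utility of a prospect over a preference ambiguity set.

    grid        : increasing 1-D array of outcome levels t_0 < ... < t_{n-1};
                  all lotteries below are probability vectors over this grid.
    x_lottery   : probability vector of the prospect being evaluated.
    comparisons : list of (p, q) pairs of probability vectors, each encoding an
                  elicited preference "lottery p is at least as good as lottery q".

    Returns min_u E_x[u] over nondecreasing, concave u with u(t_0)=0, u(t_{n-1})=1
    consistent with every elicited comparison.  The minimum is attained by a
    piecewise-linear utility with kinks on the grid, so one LP suffices.
    """
    n = len(grid)
    A_ub, b_ub = [], []

    # Monotonicity: u_i - u_{i+1} <= 0.
    for i in range(n - 1):
        row = np.zeros(n)
        row[i], row[i + 1] = 1.0, -1.0
        A_ub.append(row); b_ub.append(0.0)

    # Concavity: slope on [t_i, t_{i+1}] <= slope on [t_{i-1}, t_i].
    for i in range(1, n - 1):
        d0, d1 = grid[i] - grid[i - 1], grid[i + 1] - grid[i]
        row = np.zeros(n)
        row[i - 1] += 1.0 / d0
        row[i]     += -1.0 / d0 - 1.0 / d1
        row[i + 1] += 1.0 / d1
        A_ub.append(row); b_ub.append(0.0)

    # Elicited preferences: E_q[u] - E_p[u] <= 0.
    for p, q in comparisons:
        A_ub.append(np.asarray(q, float) - np.asarray(p, float))
        b_ub.append(0.0)

    # Normalization: u(t_0) = 0, u(t_{n-1}) = 1.
    A_eq = np.zeros((2, n)); A_eq[0, 0] = 1.0; A_eq[1, -1] = 1.0
    b_eq = np.array([0.0, 1.0])

    res = linprog(c=np.asarray(x_lottery, float),
                  A_ub=np.vstack(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.fun
```

Optimizing a decision (e.g., a portfolio) would then amount to maximizing this worst-case value over feasible prospects, which the paper shows can be carried out through a sequence of convex programs; the sketch above covers only the evaluation step.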