
Robust exploratory mean-variance problem with drift uncertainty

Posted by Chenchen Mou
Publication date: 2021
Paper language: English





In this paper we solve a min-max problem arising in a robust exploratory mean-variance problem with drift uncertainty. It is verified that robust investors choose the Sharpe ratio with minimal $L^2$ norm within an admissible set. The reinforcement learning framework for the mean-variance problem provides an exploration-exploitation trade-off mechanism; when model uncertainty is additionally taken into account, the robust strategy essentially puts more weight on exploitation than on exploration and thus reflects a more conservative optimization scheme. Finally, we use financial data to backtest the performance of the robust exploratory investment and find that the robust strategy can outperform the purely exploratory strategy and resist downside risk in a bear market.
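To make the setup concrete, the display below is a schematic of the kind of entropy-regularized min-max objective the abstract refers to, written in the spirit of the exploratory mean-variance formulation; the notation ($\Theta$ for the admissible set of Sharpe ratios $\rho$, $\pi$ for the distributional control, $\lambda$ for the exploration temperature, $w$ for the Lagrange multiplier attached to the mean target) and the order of the inf and sup are chosen here for illustration and need not match the paper exactly:

$$ \inf_{\pi}\ \sup_{\rho\in\Theta}\ \mathbb{E}\Big[(X_T^{\pi,\rho}-w)^2+\lambda\int_0^T\!\!\int_{\mathbb{R}}\pi_t(u)\ln\pi_t(u)\,du\,dt\Big]. $$

The inner optimization is the adversarial choice of the market's Sharpe ratio; the claim above is that the worst case is attained at the $\rho$ of minimal $L^2$ norm in $\Theta$, while the entropy term weighted by $\lambda$ is what trades exploration off against exploitation.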




Read also

We study a general class of entropy-regularized multivariate LQG mean field games (MFGs) in continuous time with $K$ distinct sub-populations of agents. We extend the notion of actions to action distributions (exploratory actions) and explicitly derive the optimal action distributions for individual agents in the limiting MFG. We demonstrate that the optimal set of action distributions yields an $\epsilon$-Nash equilibrium for the finite-population entropy-regularized MFG. Furthermore, we compare the resulting solutions with those of classical LQG MFGs and establish the equivalence of their existence.
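As a brief reminder of where such explicit action distributions come from (the notation here is generic, not the paper's): in entropy-regularized control the optimal relaxed policy typically takes a Gibbs form over actions,

$$ \pi^{*}(du\mid x)\ \propto\ \exp\!\Big(\tfrac{1}{\lambda}\,H(x,u)\Big)\,du, $$

where $H(x,\cdot)$ is the relevant state-dependent Hamiltonian or advantage term and $\lambda>0$ is the entropy weight. In the LQG setting $H(x,\cdot)$ is concave and quadratic in $u$, so $\pi^{*}(\cdot\mid x)$ is Gaussian, with a mean matching the classical LQG feedback and a covariance that scales with $\lambda$.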
In this paper we study a class of time-inconsistent terminal Markovian control problems in discrete time subject to model uncertainty. We combine the concept of sub-game perfect strategies with the adaptive robust stochastic control framework to tackle the theoretical aspects of the considered stochastic control problem. Consequently, as an important application of the theoretical results, we numerically solve the mean-variance portfolio selection problem under model uncertainty by applying a machine learning algorithm.
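For orientation, the adaptive robust idea can be sketched (ignoring the extra machinery needed for time inconsistency and sub-game perfection, and with notation chosen here only for illustration) as a Bellman-type recursion in which the state is augmented by the current parameter estimate and the adversary is restricted to a data-driven confidence region:

$$ V_t(x,c)\ =\ \sup_{a\in A}\ \inf_{\theta\in\mathcal{C}_t(c)}\ \mathbb{E}^{\theta}\!\big[\,V_{t+1}\big(X_{t+1}^{a},\,C_{t+1}\big)\ \big|\ X_t=x,\ C_t=c\,\big], $$

where $c$ carries the running estimate of the unknown model parameter, $\mathcal{C}_t(c)$ is the associated confidence region, and $C_{t+1}$ is its recursive update as new observations arrive.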
We study robust convex quadratic programs where the uncertain problem parameters can contain both continuous and integer components. Under the natural boundedness assumption on the uncertainty set, we show that the generic problems are amenable to exact copositive programming reformulations of polynomial size. These convex optimization problems are NP-hard but admit a conservative semidefinite programming (SDP) approximation that can be solved efficiently. We prove that the popular approximate S-lemma method, which is valid only in the case of continuous uncertainty, is weaker than our approximation. We also show that all results can be extended to the two-stage robust quadratic optimization setting if the problem has complete recourse. We assess the effectiveness of our proposed SDP reformulations and demonstrate their superiority over the state-of-the-art solution schemes on instances of least squares, project management, and multi-item newsvendor problems.
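For reference, the generic problem described above can be written schematically as a robust convex quadratic program (the affine dependence on the uncertainty and the symbols below are illustrative, not the paper's exact model):

$$ \min_{x\in\mathcal{X}}\ \max_{\xi\in\Xi}\ \ x^{\top}Q(\xi)\,x \;+\; q(\xi)^{\top}x \;+\; q_0(\xi), $$

where the bounded uncertainty set $\Xi$ may mix continuous and integer components of $\xi$. The exact copositive reformulation replaces the inner maximization, and relaxing the copositive cone to the semidefinite cone gives the tractable conservative approximation mentioned above.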
Jeremiah Birrell (2020)
Distributionally robust optimization (DRO) is a widely used framework for optimizing objective functionals in the presence of both randomness and model-form uncertainty. A key step in the practical solution of many DRO problems is a tractable reformulation of the optimization over the chosen model ambiguity set, which is generally infinite dimensional. Previous works have solved this problem in the case where the objective functional is an expected value. In this paper we study objective functionals that are the sum of an expected value and a variance penalty term. We prove that the corresponding variance-penalized DRO problem over an $f$-divergence neighborhood can be reformulated as a finite-dimensional convex optimization problem. This result also provides tight uncertainty quantification bounds on the variance.
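To fix ideas, the variance-penalized DRO problem has the shape shown first below (with ambiguity radius $\varepsilon$ and variance penalty $\beta\ge 0$, both symbols chosen here for illustration); for comparison, in the pure expectation case with a KL neighborhood the classical duality already gives a one-dimensional reformulation under mild integrability conditions:

$$ \sup_{Q:\ D_f(Q\,\|\,P)\le\varepsilon}\ \Big\{\,\mathbb{E}_Q[g]+\beta\,\mathrm{Var}_Q[g]\,\Big\}, \qquad \sup_{Q:\ D_{\mathrm{KL}}(Q\,\|\,P)\le\varepsilon}\mathbb{E}_Q[g]\;=\;\inf_{\lambda>0}\Big\{\lambda\varepsilon+\lambda\log\mathbb{E}_P\big[e^{g/\lambda}\big]\Big\}. $$

The contribution summarized above is an analogous finite-dimensional convex reformulation when the variance term ($\beta>0$) is present.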
Robust control is a core approach for controlling systems with performance guarantees that are robust to modeling error, and is widely used in real-world systems. However, current robust control approaches can only handle small system uncertainty, and thus require significant effort in system identification prior to controller design. We present an online approach that robustly controls a nonlinear system under large model uncertainty. Our approach is based on decomposing the problem into two sub-problems, robust control design (which assumes small model uncertainty) and chasing consistent models, which can be solved using existing tools from control theory and online learning, respectively. We provide a learning convergence analysis that yields a finite mistake bound on the number of times performance requirements are not met and can provide strong safety guarantees, by bounding the worst-case state deviation. To the best of our knowledge, this is the first approach for online robust control of nonlinear systems with such learning theoretic and safety guarantees. We also show how to instantiate this framework for general robotic systems, demonstrating the practicality of our approach.
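A toy, self-contained illustration of the decomposition just described (this is not the paper's algorithm or code; the scalar system, the finite candidate set, and the crude "mistake" counter are simplifications chosen here only to make the control/model-chasing loop runnable):

import numpy as np

# Toy sketch (illustrative only): scalar system x_{t+1} = theta*x_t + u_t + w_t with unknown theta
# and bounded noise |w_t| <= W. We alternate between (i) a controller designed for the currently
# selected candidate model and (ii) "chasing" a candidate that remains consistent with the data.

rng = np.random.default_rng(0)
theta_true = 1.3                       # unknown ground-truth parameter
candidates = [-1.5, -0.5, 0.5, 1.3, 2.0]
W = 0.05                               # disturbance bound
selected = candidates[0]               # deliberately start from a wrong model
x, mistakes = 5.0, 0

for t in range(50):
    u = -selected * x                  # certainty-equivalent control for the selected candidate
    x_next = theta_true * x + u + rng.uniform(-W, W)

    # Keep only candidates consistent with the observed transition (the true theta always survives).
    candidates = [th for th in candidates if abs(x_next - (th * x + u)) <= W + 1e-9]
    if selected not in candidates:     # switch models only when forced: "chasing consistent models"
        selected = candidates[0]

    if abs(x) > 1.0 and abs(x_next) > abs(x):   # crude stand-in for a performance violation
        mistakes += 1
    x = x_next

print("mistakes:", mistakes, "| surviving candidates:", candidates)

The learning-theoretic point of the paper is that, with a properly designed robust controller and model-chasing rule, the number of such violations admits a finite bound; the toy above only mimics the structure of that argument.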