
On the relative value iteration with a risk-sensitive criterion

Published by: Ari Arapostathis
Publication date: 2019
Research language: English





A multiplicative relative value iteration algorithm for solving the dynamic programming equation of the risk-sensitive control problem is studied for discrete-time controlled Markov chains with a compact Polish state space, and for controlled diffusions on the whole Euclidean space. The main result is a proof of convergence to the desired limit in each case.
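The multiplicative relative value iteration described above can be sketched in the simplest setting of a finite-state, finite-action controlled Markov chain (a simplification of the paper's compact Polish state space); the MDP data below are randomly generated for illustration only:

```python
import numpy as np

# Hypothetical finite MDP: P[x, a, y] is the transition kernel, c(x, a) the
# running cost. Fully supported Dirichlet kernels make the chain irreducible,
# which is what makes the relative iteration converge here.
rng = np.random.default_rng(0)
nS, nA = 5, 3
P = rng.dirichlet(np.ones(nS), size=(nS, nA))
c = rng.uniform(0.0, 1.0, size=(nS, nA))

def multiplicative_rvi(P, c, ref=0, tol=1e-10, max_iter=10_000):
    """Iterate V <- min_a e^{c(x,a)} * sum_y P(y|x,a) V(y), renormalizing at a
    reference state each step. The normalization factor tends to e^{lambda},
    where lambda is the risk-sensitive average cost, and V tends to the
    associated eigenfunction of the multiplicative Poisson equation."""
    V = np.ones(P.shape[0])
    for _ in range(max_iter):
        TV = (np.exp(c) * np.einsum("xay,y->xa", P, V)).min(axis=1)
        growth = TV[ref]           # approximates e^{lambda}
        V_new = TV / growth        # the "relative" normalization
        if np.max(np.abs(V_new - V)) < tol:
            return np.log(growth), V_new
        V = V_new
    return np.log(growth), V

lam, h = multiplicative_rvi(P, c)
```

The division by `TV[ref]` is the relative step: without it the iterates grow (or decay) geometrically at rate `e^{lambda}` and never converge.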


Read also

In this article we consider the ergodic risk-sensitive control problem for a large class of multidimensional controlled diffusions on the whole space. We study the minimization and maximization problems under either a blanket stability hypothesis, or a near-monotone assumption on the running cost. We establish the convergence of the policy improvement algorithm for these models. We also present a more general result concerning the region of attraction of the equilibrium of the algorithm.
The paper solves constrained Dynkin games with risk-sensitive criteria, where the two players are allowed to stop at two independent Poisson random intervention times, via the theory of backward stochastic differential equations. This generalizes the previous work of [Liang and Sun, Dynkin games with Poisson random intervention times, SIAM Journal on Control and Optimization, 2019] from risk-neutral criteria and a common signal time for both players to risk-sensitive criteria and two heterogeneous signal times. Furthermore, the paper establishes a connection between such constrained risk-sensitive Dynkin games and a class of stochastic differential games via Krylov's randomized stopping technique.
We consider a large family of discrete- and continuous-time controlled Markov processes and study an ergodic risk-sensitive minimization problem. Under a blanket stability assumption, we provide a complete analysis of this problem. In particular, we establish uniqueness of the value function and a verification result for optimal stationary Markov controls, in addition to existence results. We also revisit this problem under a near-monotonicity condition but without any stability hypothesis. Our results also include policy improvement algorithms in both discrete- and continuous-time frameworks.
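The policy improvement idea mentioned in the two abstracts above can be sketched for a finite chain: evaluate a stationary policy via the principal (Perron) eigenvalue of its exponentially twisted kernel, then improve greedily against the current eigenfunction. The MDP data and function names are illustrative assumptions, not taken from either paper:

```python
import numpy as np

# Hypothetical finite MDP with strictly positive transition kernels.
rng = np.random.default_rng(1)
nS, nA = 4, 2
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[x, a, y]
c = rng.uniform(0.0, 1.0, size=(nS, nA))        # running cost c(x, a)

def evaluate(policy):
    """Risk-sensitive average cost of a stationary policy: log of the Perron
    eigenvalue of the twisted kernel e^{c_pi(x)} P_pi(x, y)."""
    Ppi = P[np.arange(nS), policy]                       # (nS, nS)
    M = np.exp(c[np.arange(nS), policy])[:, None] * Ppi
    vals, vecs = np.linalg.eig(M)
    i = np.argmax(vals.real)
    h = np.abs(vecs[:, i].real)                          # Perron eigenfunction
    return np.log(vals[i].real), h

def policy_improvement(max_rounds=50):
    """Iterate: evaluate the current policy, then pick the greedy policy for
    the multiplicative Bellman operator with the current eigenfunction."""
    policy = np.zeros(nS, dtype=int)
    lam, h = evaluate(policy)
    for _ in range(max_rounds):
        Q = np.exp(c) * np.einsum("xay,y->xa", P, h)
        new_policy = Q.argmin(axis=1)
        if np.array_equal(new_policy, policy):
            return lam, policy
        policy = new_policy
        lam, h = evaluate(policy)
    return lam, policy

lam_star, pi_star = policy_improvement()
```

For irreducible finite chains each improvement step does not increase the risk-sensitive cost, so the loop stabilizes in finitely many rounds; the papers above establish the analogous convergence in far more general settings.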
Paul Dupuis, Vaios Laschos, 2018
We study sequences, parametrized by the number of agents, of many-agent exit-time stochastic control problems with a risk-sensitive cost structure. We identify a fully characterizing assumption under which each such control problem corresponds to a risk-neutral stochastic control problem with additive cost, and subsequently to a risk-neutral stochastic control problem on the simplex, where the specific information about the state of each agent can be discarded. We also prove that, under some additional assumptions, the sequence of value functions converges to the value function of a deterministic control problem, which can be used to design nearly optimal controls for the original problem when the number of agents is sufficiently large.
We propose a generalization of the classical notion of $V@R_{\lambda}$ that takes into account not only the probability of the losses, but also the balance between this probability and the amount of the loss. This is obtained by defining a new class of law-invariant risk measures based on an appropriate family of acceptance sets. $V@R_{\lambda}$ and other known law-invariant risk measures turn out to be special cases of our proposal. We further prove a dual representation of risk measures on $\mathcal{P}(\mathbb{R})$.
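For context, the classical value at risk that the abstract generalizes can itself be written through an acceptance set; the following is the standard textbook construction, not the paper's new family:

```latex
% Classical value at risk at level \lambda \in (0,1):
V@R_{\lambda}(X) \;=\; -\inf\{\, x \in \mathbb{R} : P(X \le x) > \lambda \,\}.

% Equivalently, as the risk measure induced by an acceptance set
% \mathcal{A}_\lambda = \{ X : P(X < 0) \le \lambda \}:
\rho_{\mathcal{A}_\lambda}(X) \;=\; \inf\{\, m \in \mathbb{R} : X + m \in \mathcal{A}_\lambda \,\}
\;=\; V@R_{\lambda}(X).
```

The paper's generalization replaces $\mathcal{A}_\lambda$, which only constrains the probability of a loss, with acceptance sets that also weigh its magnitude.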