234 - Rainer Buckdahn 2014
We investigate a two-player zero-sum differential game with asymmetric information on the payoff and without the Isaacs condition. The dynamics is an ordinary differential equation parametrised by two controls chosen by the players. Each player has private information on the payoff of the game, while his opponent knows only the probability distribution of the other player's information. We show that a suitable definition of random strategies allows us to prove the existence of a value in mixed strategies. Moreover, the value function can be characterised, in a dual sense, as the unique viscosity solution of a Hamilton-Jacobi-Isaacs equation. Here we do not assume the Isaacs condition, which is usually imposed in differential games.
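For context, the Isaacs condition that the abstract dispenses with can be sketched as follows (the notation below is standard but assumed, not taken from the paper): for dynamics $\dot{x} = f(x,u,v)$ with running payoff $\ell$, it requires the lower and upper Hamiltonians to coincide.

```latex
% Isaacs condition (sketch; notation assumed):
% lower and upper Hamiltonians agree for all (x, p)
H^{-}(x,p) \;=\; \inf_{u \in U} \sup_{v \in V}
  \big\{ \langle p, f(x,u,v) \rangle + \ell(x,u,v) \big\}
\;=\;
\sup_{v \in V} \inf_{u \in U}
  \big\{ \langle p, f(x,u,v) \rangle + \ell(x,u,v) \big\}
\;=\; H^{+}(x,p).
```

When this min-max/max-min exchange holds, the game has a value in pure strategies; the abstract's point is that randomizing the strategies restores existence of a value even when it fails.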
We study a two-player, zero-sum, stochastic game with incomplete information on one side in which the players are allowed to play more and more frequently. The informed player observes the realization of a Markov chain on which the payoffs depend, while the non-informed player only observes his opponent's actions. We show the existence of a limit value as the time span between two consecutive stages vanishes; this value is characterized through an auxiliary optimization problem and as the solution of a Hamilton-Jacobi equation.
62 - Rainer Buckdahn 2009
We consider a stochastic control problem which is composed of a controlled stochastic differential equation, and whose associated cost functional is defined through a controlled backward stochastic differential equation. Under appropriate convexity assumptions on the coefficients of the forward and the backward equations, we prove the existence of an optimal control on a suitable reference stochastic system. The proof is based on an approximation of the stochastic control problem by a sequence of control problems with smooth coefficients, each admitting an optimal feedback control. The quadruplet formed by this optimal feedback control and the associated solution of the forward and the backward equations is shown to converge in law, at least along a subsequence. The convexity assumptions on the coefficients then allow us to construct from this limit an admissible control process which, on an appropriate reference stochastic system, is optimal for our stochastic control problem.
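The forward-backward structure described above can be sketched as follows (a standard formulation with notation assumed here, not taken from the paper): the control enters a forward SDE for the state and a backward SDE whose initial value defines the cost.

```latex
% Controlled forward-backward system (sketch; notation assumed):
% forward state equation
dX_t = b(t, X_t, u_t)\,dt + \sigma(t, X_t, u_t)\,dW_t,
\qquad X_0 = x,
% backward equation defining the cost functional
dY_t = -f(t, X_t, Y_t, Z_t, u_t)\,dt + Z_t\,dW_t,
\qquad Y_T = \Phi(X_T),
% cost to be minimized over admissible controls u
J(u) = Y_0^{u}.
```

The approximation argument in the abstract then replaces $(b, \sigma, f, \Phi)$ by smooth coefficients, extracts a limit in law of the optimal quadruplets, and uses convexity to turn that limit into an admissible optimal control.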