We give a brief presentation of capacity theory and show how a measurable selection theorem derives naturally from it, following the approach of Dellacherie (1972). We then present the classical method for proving the dynamic programming principle for discrete-time stochastic control problems, using measurable selection arguments. Finally, we propose a continuous-time extension, namely an abstract framework for the continuous-time dynamic programming principle (DPP).
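For orientation, the discrete-time DPP referred to here can be written, in a generic formulation (the notation is ours, not the paper's), as the backward recursion

```latex
% Generic discrete-time dynamic programming principle (our notation).
% V_n: value function at time n; A: action set; r_n: running reward;
% X^{n,x,a}: controlled state process started from x at time n.
\[
  V_n(x) \;=\; \sup_{a \in A}\,
    \mathbb{E}\!\left[\, r_n(x,a) \;+\; V_{n+1}\!\big(X_{n+1}^{\,n,x,a}\big) \right],
  \qquad V_N(x) \;=\; g(x).
\]
```

Measurable selection is what makes this recursion legitimate: it guarantees that an ε-optimal action can be chosen measurably as a function of the state, so that each V_n is itself measurable and the expectations above are well defined.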
We aim to give an overview of how to derive the dynamic programming principle for a general stochastic control/stopping problem, using measurable selection techniques. By considering their martingale problem formulation, we show how to check the required measurability conditions for different …
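In its abstract continuous-time form (again in generic notation, which may differ from the paper's), the DPP states that for every stopping time θ between t and the horizon,

```latex
% Abstract continuous-time DPP (generic notation, ours).
% V: value function; alpha: admissible controls; theta: a stopping time;
% X^{t,x,alpha}: controlled state process started from x at time t.
\[
  V(t,x) \;=\; \sup_{\alpha}\,
    \mathbb{E}\!\left[ \int_t^{\theta} r\big(s, X_s^{t,x,\alpha}, \alpha_s\big)\,ds
      \;+\; V\big(\theta,\, X_{\theta}^{t,x,\alpha}\big) \right].
\]
```

The delicate points are precisely the measurability of (t, x) ↦ V(t, x) and the existence of measurable ε-optimal controls, which is where the selection theorems enter.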
For years, there has been interest in approximation methods for solving dynamic programming problems, because of the inherent complexity of computing optimal solutions characterized by Bellman's principle of optimality. A wide range of approximate dynamic programming …
This paper discusses the odds problem, proposed by Bruss in 2000, and its variants. A recurrence relation called a dynamic programming (DP) equation is used to find an optimal stopping policy for the odds problem and its variants. In 2013, Buchbinder, …
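For context, Bruss's odds theorem gives the optimal stopping rule for the basic odds problem (stop on the last success of a sequence of independent trials) in closed form, and the resulting algorithm is a few lines of code. The sketch below is ours; the function name and interface are illustrative, not from the paper:

```python
def bruss_odds_threshold(p):
    """Bruss's odds algorithm (2000): to stop on the last success of
    independent trials with success probabilities p[0..n-1], stop at the
    first success at or after index s, where s is the point at which the
    backward running sum of the odds r_k = p_k / (1 - p_k) reaches 1.
    Returns (s, win_probability), with s 1-based.  Illustrative sketch."""
    n = len(p)
    odds_sum, q_prod, s = 0.0, 1.0, 1
    for k in range(n - 1, -1, -1):          # scan from the last trial
        q = 1.0 - p[k]
        odds_sum += p[k] / q
        q_prod *= q
        if odds_sum >= 1.0:                 # tail odds first reach 1 here
            s = k + 1                       # 1-based threshold index
            break
    # If the total odds never reach 1, s stays 1: stop at any success.
    return s, q_prod * odds_sum             # odds-theorem win probability
```

With p[k] = 1/(k+1) (the classical secretary problem, where trial k succeeds if the k-th candidate is the best so far) and n = 100, the rule recovers the familiar threshold near n/e with success probability close to 1/e; the first entry p[0] = 1 is never divided through, since the backward scan stops well before the first trial.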
We propose a discretization of the optimality principle in dynamic programming based on radial basis functions and Shepard's moving least squares approximation method. We prove convergence of the approximate optimal value function to the true one and present several numerical experiments.
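To make the idea concrete, here is a toy version of such a scheme (our own sketch, not the authors' code, simplified to plain inverse-distance Shepard weights in one dimension): Bellman iteration on a set of sample nodes, with the value at off-grid successor states read off by Shepard interpolation.

```python
import numpy as np

def shepard_eval(x, nodes, values, power=2.0, eps=1e-12):
    """Shepard (inverse-distance-weighted) estimate at x of a function
    known only through `values` at the 1-D sample `nodes`."""
    d = np.abs(nodes - x)
    if d.min() < eps:                  # query coincides with a node
        return values[d.argmin()]
    w = d ** (-power)
    return w @ values / w.sum()

def value_iteration_shepard(nodes, f, r, actions, gamma=0.95, n_iter=300):
    """Toy discretized optimality principle:
    V(x) = max_a [ r(x, a) + gamma * V(f(x, a)) ],
    where V(f(x, a)) is interpolated from the nodes via Shepard's method."""
    V = np.zeros(len(nodes))
    for _ in range(n_iter):
        V = np.array([max(r(x, a) + gamma * shepard_eval(f(x, a), nodes, V)
                          for a in actions)
                      for x in nodes])
    return V

# Hypothetical 1-D example: steer the state toward the origin on [-1, 1].
nodes = np.linspace(-1.0, 1.0, 41)
f = lambda x, a: np.clip(x + a, -1.0, 1.0)   # controlled dynamics
r = lambda x, a: -x * x                      # running reward (quadratic cost)
V = value_iteration_shepard(nodes, f, r, actions=[-0.1, 0.0, 0.1])
```

One reason such schemes behave well: the Shepard weights are nonnegative and sum to one, so interpolation is a convex combination of node values and the interpolated Bellman operator remains a gamma-contraction in the sup norm, which keeps the iteration stable.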
Renegar (2016) introduced a novel approach to transforming generic conic optimization problems into unconstrained, uniformly Lipschitz continuous minimization. We introduce radial transformations generalizing these ideas, equipped with an entirely new …