This paper extends to Continuous-Time Jump Markov Decision Processes (CTJMDPs) the classic result for Markov Decision Processes stating that, for a given initial state distribution, for every policy there is a (randomized) Markov policy, definable in a natural way, such that at each time instant the marginal distributions of state-action pairs under the two policies coincide. It is shown that this equality holds for a CTJMDP if the corresponding Markov policy defines a nonexplosive jump Markov process. If this Markov process is explosive, then at each time instant the marginal probability that a state-action pair belongs to a given measurable set of state-action pairs is no greater under the described Markov policy than under the original policy. These results are used to prove that, for expected discounted total costs and for average costs per unit time, for a given initial state distribution and every policy for a CTJMDP, the described Markov policy has the same or better performance.
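To make the construction concrete, here is a hedged sketch (the symbols \mu, \sigma, \pi below are notation introduced for illustration, not quoted from the paper): given an initial distribution \mu and an arbitrary policy \sigma, the natural Markov policy \pi is the regular conditional distribution of the action given the current time and state,

  \pi_t(\mathrm{d}a \mid x) \;:=\; \mathbb{P}^{\sigma}_{\mu}\bigl(a(t) \in \mathrm{d}a \,\big|\, x(t) = x\bigr),

and the matching-marginals statement reads: for all t \ge 0 and all measurable sets B of state-action pairs,

  \mathbb{P}^{\pi}_{\mu}\bigl((x(t), a(t)) \in B\bigr) \;=\; \mathbb{P}^{\sigma}_{\mu}\bigl((x(t), a(t)) \in B\bigr)

whenever \pi defines a nonexplosive jump Markov process, with equality weakened to "\le" on the left-hand side in the explosive case.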
This paper describes the structure of solutions to Kolmogorov's equations for nonhomogeneous jump Markov processes and applications of these results to the control of jump stochastic systems. These equations were studied by Feller (1940), who clarified in …
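For orientation, the backward Kolmogorov equation in a standard hedged form (the kernel notation below is assumed here, not quoted from this paper): for a nonhomogeneous jump Markov process with transition function P(s, x; t, B), jump intensity measure q(s, x, \Gamma), and total jump rate q(s, x) = q(s, x, X \setminus \{x\}),

  \frac{\partial P(s,x;t,B)}{\partial s} \;=\; q(s,x)\, P(s,x;t,B) \;-\; \int_X q(s,x,\mathrm{d}y)\, P(s,y;t,B),

with terminal condition P(t, x; t, B) = \mathbf{1}_B(x); the questions studied since Feller concern existence, uniqueness, and minimality of solutions to this equation and its forward counterpart.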
The objective of this work is to study continuous-time Markov decision processes on a general Borel state space with both impulsive and continuous controls for the infinite-time horizon discounted cost. The continuous-time controlled process is shown …
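As a sketch of the criterion in play (the symbols below are illustrative assumptions, not this paper's notation): with discount rate \alpha > 0, running cost c for the continuous control a(\cdot), impulse times \tau_n, impulsive actions \xi_n, and impulse cost C, an infinite-horizon discounted cost typically takes the form

  V(x) \;=\; \mathbb{E}_x\!\left[ \int_0^{\infty} e^{-\alpha t}\, c\bigl(x(t), a(t)\bigr)\,\mathrm{d}t \;+\; \sum_{n \ge 1} e^{-\alpha \tau_n}\, C\bigl(x(\tau_n^-), \xi_n\bigr) \right],

so the controller trades off continuously accrued cost against the lump costs of impulses.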
In a variety of applications, an agent's success depends on the knowledge that an adversarial observer has, or can gather, about the agent's decisions. It is therefore desirable for the agent to achieve a task while reducing the ability of an observer to …
The aim of this paper is to propose a new numerical approximation of the Kalman-Bucy filter for semi-Markov jump linear systems. This approximation is based on the selection of typical trajectories of the driving semi-Markov chain of the process by …
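For context, a generic semi-Markov jump linear system and its filter look as follows (an assumed generic form, not taken from the paper): with a semi-Markov mode process \theta(t) selecting the system matrices,

  \mathrm{d}x(t) = A(\theta(t))\, x(t)\,\mathrm{d}t + B(\theta(t))\,\mathrm{d}w(t), \qquad \mathrm{d}y(t) = C(\theta(t))\, x(t)\,\mathrm{d}t + D(\theta(t))\,\mathrm{d}v(t),

the Kalman-Bucy filter propagates the conditional mean \hat{x}(t) and covariance P(t):

  \mathrm{d}\hat{x} = A(\theta)\hat{x}\,\mathrm{d}t + P C(\theta)^{\top}\bigl(D(\theta)D(\theta)^{\top}\bigr)^{-1}\bigl(\mathrm{d}y - C(\theta)\hat{x}\,\mathrm{d}t\bigr),
  \dot{P} = A(\theta)P + P A(\theta)^{\top} + B(\theta)B(\theta)^{\top} - P C(\theta)^{\top}\bigl(D(\theta)D(\theta)^{\top}\bigr)^{-1} C(\theta) P.

Since these equations depend on the entire mode trajectory, restricting attention to a set of typical trajectories of the driving semi-Markov chain is what makes a numerical approximation tractable.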
We study the problem of synthesizing a controller that maximizes the entropy of a partially observable Markov decision process (POMDP) subject to a constraint on the expected total reward. Such a controller minimizes the predictability of an agent's …
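In symbols, one hedged formalization (my notation; the paper's exact formulation may differ) is

  \max_{\pi}\; H\bigl(\mathbb{P}^{\pi}\bigr) \quad \text{subject to} \quad \mathbb{E}^{\pi}\!\left[\sum_{t=0}^{T} R(s_t, a_t)\right] \ge \Gamma,

where H is the Shannon entropy of the distribution \mathbb{P}^{\pi} that the controller \pi induces over the POMDP's trajectories, R is the reward function, and \Gamma is the required expected total reward; higher trajectory entropy means lower predictability for an outside observer.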