This paper deals with the long run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space and with a compact action space depending on the state variable. The control variable acts on the jump rate and transition measure of the PDMP, and the running and boundary costs are assumed to be positive but not necessarily bounded. Our first main result is an optimality equation for the long run average cost, expressed in terms of a discrete-time optimality equation related to the embedded Markov chain given by the post-jump location of the PDMP. Our second main result guarantees the existence of a feedback measurable selector for the discrete-time optimality equation by establishing a connection between this equation and an integro-differential equation. Our final main result gives sufficient conditions for the existence of a solution to a discrete-time optimality inequality and of an ordinary optimal feedback control for the long run average cost, via the so-called vanishing discount approach.
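For orientation, the long run average cost in this class of problems is typically defined as follows; the notation here is illustrative rather than taken from the paper, with $f$ the running cost, $r$ the boundary cost, $N^{*}_t$ the number of jumps from the active boundary up to time $t$, and $U$ an admissible control strategy:
\[
\mathcal{A}(U,x) \;=\; \limsup_{t\to\infty} \frac{1}{t}\, E^{U}_{x}\!\left[ \int_{0}^{t} f\bigl(X_s, u(s)\bigr)\,\mathrm{d}s \;+\; \int_{0}^{t} r\bigl(X_{s-}, u(s)\bigr)\,\mathrm{d}N^{*}_{s} \right].
\]
The discrete-time optimality equation mentioned above relates this continuous-time criterion to the embedded Markov chain of post-jump locations, so that the average-cost problem can be analyzed one inter-jump interval at a time.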
The main goal of this paper is to derive sufficient conditions for the existence of an optimal control strategy for the long run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space …
The main goal of this paper is to apply the so-called policy iteration algorithm (PIA) to the long run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space and with a compact action space …
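As a rough reminder of how a policy iteration scheme proceeds in the average-cost setting (the precise operators and assumptions for PDMPs are those of the paper and are not reproduced here), one alternates a policy evaluation step and a policy improvement step. In a generic Markov decision formulation with transition kernel $P_a$, running cost $c(x,a)$, state-dependent compact action sets $\mathbf{A}(x)$, and current policy $u_n$, a sketch is:
\[
\text{(evaluation)}\qquad \rho_n + h_n(x) \;=\; c\bigl(x, u_n(x)\bigr) + \int h_n(y)\, P_{u_n(x)}(\mathrm{d}y \mid x),
\]
\[
\text{(improvement)}\qquad u_{n+1}(x) \;\in\; \operatorname*{arg\,min}_{a \in \mathbf{A}(x)} \left\{ c(x,a) + \int h_n(y)\, P_{a}(\mathrm{d}y \mid x) \right\},
\]
where $\rho_n$ is the average cost of the policy $u_n$ and $h_n$ its relative value (bias) function; under suitable conditions the sequence $\rho_n$ is nonincreasing and converges to the optimal average cost.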
The time it takes the fastest searcher out of $N\gg 1$ searchers to find a target determines the timescale of many physical, chemical, and biological processes. This time is called an extreme first passage time (FPT) and is typically much faster than the …
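For concreteness, if $\tau_1,\ldots,\tau_N$ denote the individual (i.i.d.) first passage times of the $N$ searchers, the extreme first passage time is
\[
T_N \;:=\; \min\{\tau_1, \tau_2, \ldots, \tau_N\},
\]
and the point of the extreme regime is that $T_N$ can be orders of magnitude smaller than a typical $\tau_i$; for diffusive search the expected value of $T_N$ is known to decay only logarithmically in $N$, roughly like $1/\ln N$ for large $N$ (this scaling is stated here as general background, not as a result of the paper above).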
We consider a piecewise-deterministic Markov process (PDMP) with a general conditional distribution of the inter-occurrence time, which is called a general PDMP here. Our purpose is to establish the theory of measure-valued generators for general PDMPs. The …
We consider a general piecewise deterministic Markov process (PDMP) $X=\{X_t\}_{t\geqslant 0}$ with measure-valued generator $\mathcal{A}$, for which the conditional distribution function of the inter-occurrence time is not necessarily absolutely continuous …
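Loosely speaking, and with notation that is only indicative rather than the paper's own, a measure-valued generator replaces the usual rate of change along the path by a measure in time: for suitable functions $f$ one asks that
\[
f(X_t) \;-\; f(X_0) \;-\; \int_{(0,\,t]} \mathcal{A}f(\mathrm{d}s)
\]
be a local martingale, where $\mathcal{A}f$ is a (random) signed measure on $(0,\infty)$ rather than a density integrated against $\mathrm{d}s$. This accommodates conditional inter-occurrence distributions with atoms or singular parts, precisely the situation in which the classical extended generator of a PDMP is no longer adequate.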