We consider a general piecewise deterministic Markov process (PDMP) $X=\{X_t\}_{t\geqslant 0}$ with measure-valued generator $\mathcal{A}$, for which the conditional distribution function of the inter-occurrence time is not necessarily absolutely continuous. A general form of the exponential martingales is presented as $$M^f_t=\frac{f(X_t)}{f(X_0)}\left[\mathrm{Sexp}\left(\int_{(0,t]}\frac{\mathrm{d}L(\mathcal{A}f)_s}{f(X_{s-})}\right)\right]^{-1}.$$ Using this exponential martingale as a likelihood-ratio process, we define a new probability measure, show that the original process remains a general PDMP under the new measure, and find the new measure-valued generator and its domain.
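The abstract does not define $\mathrm{Sexp}$; presumably it denotes the stochastic (Doléans-Dade) exponential, which for a right-continuous process $A$ of locally finite variation with $A_0=0$ takes the explicit form $$\mathrm{Sexp}(A)_t=\mathrm{e}^{A^c_t}\prod_{0<s\leqslant t}\left(1+\Delta A_s\right),$$ where $A^c$ denotes the continuous part of $A$ and $\Delta A_s=A_s-A_{s-}$ its jumps; this standard identity is recalled only to fix notation.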
We consider a piecewise deterministic Markov process (PDMP) with a general conditional distribution of the inter-occurrence time, which we call a general PDMP here. Our purpose is to establish the theory of the measure-valued generator for general PDMPs. We first introduce the additive functional of a semi-dynamic system (SDS), which provides an analytic tool for the whole paper. The additive functionals of a general PDMP are represented in terms of additive functionals of the SDS, and necessary and sufficient conditions for them to be local martingales or special semimartingales are given. We then introduce the measure-valued generator for a general PDMP, which takes values in the space of additive functionals of the SDS, and completely describe its domain by analytic conditions. The domain is extended to the locally (path-)finite variation functions. As an application of the measure-valued generator, we study the expected cumulative discounted value of an additive functional of the general PDMP and derive a measure integro-differential equation satisfied by this value function.
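For concreteness, the expected cumulative discounted value referred to here is, in generic notation (the abstract itself does not fix any), $$V(x)=\mathbb{E}_x\!\left[\int_{(0,\infty)}\mathrm{e}^{-\alpha s}\,\mathrm{d}A_s\right],\qquad \alpha>0,$$ where $A$ is the additive functional in question; the measure integro-differential equation mentioned above characterizes this function $V$ through the measure-valued generator.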
This paper deals with the long run average continuous control problem for piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space, with a compact action space depending on the state variable. The control variable acts on the jump rate and transition measure of the PDMP, and the running and boundary costs are assumed to be positive but not necessarily bounded. Our first main result obtains an optimality equation for the long run average cost in terms of a discrete-time optimality equation related to the embedded Markov chain given by the post-jump location of the PDMP. Our second main result guarantees the existence of a feedback measurable selector for the discrete-time optimality equation by establishing a connection between this equation and an integro-differential equation. Our final main result gives sufficient conditions for the existence of a solution to a discrete-time optimality inequality and of an ordinary optimal feedback control for the long run average cost, using the so-called vanishing discount approach.
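To fix ideas, a discrete-time average-cost optimality equation of the kind referred to here typically reads $$\rho+h(x)=\min_{a\in\mathbb{A}(x)}\left\{c(x,a)+\int_E h(y)\,G(\mathrm{d}y\mid x,a)\right\},$$ where $\rho$ is the optimal long run average cost, $h$ a relative value function, $c$ a one-stage cost, and $G$ the transition kernel of the embedded chain of post-jump locations; this notation is generic and not the paper's own.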
The time it takes the fastest searcher out of $N\gg 1$ searchers to find a target determines the timescale of many physical, chemical, and biological processes. This time is called an extreme first passage time (FPT) and is typically much faster than the FPT of a single searcher. Extreme FPTs of diffusion have been studied for decades, but little is known for other types of stochastic processes. In this paper, we study the distribution of extreme FPTs of piecewise deterministic Markov processes (PDMPs). PDMPs are a broad class of stochastic processes that evolve deterministically between random events. Using classical extreme value theory, we prove general theorems which yield the distribution and moments of extreme FPTs in the limit of many searchers based on the short time distribution of the FPT of a single searcher. We then apply these theorems to some canonical PDMPs, including run-and-tumble searchers in one, two, and three space dimensions. We discuss our results in the context of some biological systems and show how our approach accounts for an unphysical property of diffusion which can be problematic for extreme statistics.
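The basic identity from classical extreme value theory behind such results is $\mathbb{P}(T_N>t)=\left[\mathbb{P}(\tau>t)\right]^N$, where $T_N$ is the minimum of $N$ i.i.d. single-searcher FPTs $\tau$, so that $\mathbb{E}[T_N]=\int_0^\infty[\mathbb{P}(\tau>t)]^N\,\mathrm{d}t$. The following Python sketch, an illustration rather than the paper's method (all parameter values are arbitrary), estimates extreme FPTs of one-dimensional run-and-tumble searchers by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)

def rt_fpt(L=1.0, v=1.0, lam=1.0, t_max=50.0):
    """FPT of a 1D run-and-tumble searcher from x = 0 to a target at x = L.

    The searcher moves at speed v and tumbles (picks a fresh random
    direction) at rate lam.  Returns np.inf if the target is not hit
    by t_max; this is harmless for the minimum over N searchers as
    long as at least one of them finishes earlier.
    """
    t, x = 0.0, 0.0
    s = 1.0 if rng.random() < 0.5 else -1.0      # initial direction
    while t < t_max:
        dt = rng.exponential(1.0 / lam)          # time until next tumble
        if s > 0 and x + v * dt >= L:            # target hit mid-run
            return t + (L - x) / v
        t += dt
        x += s * v * dt
        s = 1.0 if rng.random() < 0.5 else -1.0  # tumble: new direction
    return np.inf

# Extreme FPT: minimum over N independent searchers, averaged over trials.
N, trials = 200, 100
T_N = [min(rt_fpt() for _ in range(N)) for _ in range(trials)]
print(np.mean(T_N))  # close to the ballistic bound L/v = 1 for large N
```

Since a run-and-tumble searcher has finite speed $v$, $T_N$ is bounded below by the ballistic time $L/v$; diffusive searchers have no such bound, which is the unphysical property of diffusion alluded to at the end of the abstract.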
The main goal of this paper is to derive sufficient conditions for the existence of an optimal control strategy for the long run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space and with compact action space depending on the state variable. To do so, we apply the so-called vanishing discount approach to obtain a solution to an average cost optimality inequality associated with the long run average cost problem. Our main assumptions are written in terms of some integro-differential inequalities related to the so-called expected growth condition, and of the geometric convergence of the post-jump location kernel associated with the PDMP.
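Schematically, the vanishing discount approach fixes a reference state $x_0$, forms the relative value $h_\alpha(x)=V_\alpha(x)-V_\alpha(x_0)$ from the $\alpha$-discounted value function $V_\alpha$, and lets $\alpha\downarrow 0$ along a subsequence to produce a constant $\rho=\lim_{\alpha\downarrow 0}\alpha V_\alpha(x_0)$ and a function $h$ satisfying an average cost optimality inequality of the generic form $$\rho+h(x)\geqslant\min_{a\in\mathbb{A}(x)}\left\{c(x,a)+\int_E h(y)\,G(\mathrm{d}y\mid x,a)\right\};$$ assumptions like those above are typically used to control $h_\alpha$ uniformly in $\alpha$. The notation here is generic rather than the paper's own.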
The main goal of this paper is to apply the so-called policy iteration algorithm (PIA) to the long run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space and with compact action space depending on the state variable. To do so, we first derive some important properties of a pseudo-Poisson equation associated with the problem. It is then shown that, under some classical hypotheses, the PIA converges to a solution satisfying the optimality equation, and that this solution yields an optimal control strategy in feedback form for the average control problem of the continuous-time PDMP.
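As a schematic illustration of the PIA, here in the much simpler setting of a finite, unichain Markov decision process rather than the general Borel setting of the paper (all names and numbers are hypothetical), each iteration solves a Poisson equation for the current policy and then improves the policy greedily:

```python
import numpy as np

def policy_iteration_avg(P, c, max_iter=100):
    """Average-cost policy iteration on a finite, unichain MDP.

    P[a] is the n x n transition matrix under action a and c[a] the
    n-vector of running costs.  Policy evaluation solves the Poisson
    equation  rho + h = c_u + P_u h  with the normalization h[0] = 0;
    policy improvement is greedy in  c_a + P_a h.
    """
    n_actions, n = len(P), P[0].shape[0]
    u = np.zeros(n, dtype=int)                       # initial policy
    for _ in range(max_iter):
        # Transition matrix and cost vector under the current policy.
        Pu = np.vstack([P[u[i]][i] for i in range(n)])
        cu = np.array([c[u[i]][i] for i in range(n)])
        # Unknowns (rho, h[1], ..., h[n-1]); h[0] is pinned to 0.
        A = np.hstack([np.ones((n, 1)), (np.eye(n) - Pu)[:, 1:]])
        sol = np.linalg.solve(A, cu)
        rho, h = sol[0], np.concatenate(([0.0], sol[1:]))
        # Greedy improvement step.
        q = np.array([c[a] + P[a] @ h for a in range(n_actions)])
        u_new = q.argmin(axis=0)
        if np.array_equal(u_new, u):                 # PIA has converged
            return rho, h, u
        u = u_new
    return rho, h, u

# Toy two-state, two-action example (numbers are arbitrary).
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),
     np.array([[0.5, 0.5], [0.5, 0.5]])]
c = [np.array([1.0, 2.0]), np.array([1.5, 0.5])]
rho, h, u = policy_iteration_avg(P, c)
print(rho, u)
```

When the improvement step no longer changes the policy, the pair $(\rho,h)$ satisfies the average-cost optimality equation and the current policy is optimal in feedback form, which mirrors, at this elementary level, the convergence statement of the paper.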