We study runtime monitoring of $\omega$-regular properties. We consider a simple setting in which a run of an unknown finite-state Markov chain $\mathcal{M}$ is monitored against a fixed but arbitrary $\omega$-regular specification $\varphi$. The purpose of monitoring is to keep aborting runs that are unlikely to satisfy the specification until $\mathcal{M}$ executes a correct run. We design controllers for the reset action that (assuming that $\varphi$ has positive probability) satisfy the following property w.p.1: the number of resets is finite, and the run executed by $\mathcal{M}$ after the last reset satisfies $\varphi$.
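To make the reset idea concrete, here is a minimal sketch, assuming a toy chain and a toy reachability property ("eventually visit GOAL") in place of a general $\omega$-regular specification; the patience bound and all names are hypothetical, and this is not the controller construction from the abstract.

```python
# Illustrative sketch only: a toy "reset controller" that monitors runs of an
# unknown Markov chain against a simple reachability property ("eventually
# visit GOAL"), aborting (resetting) prefixes that exceed a patience bound.
# The chain, the property, and the patience heuristic are hypothetical
# stand-ins, not the construction from the abstract above.
import random

# Hypothetical unknown Markov chain: successor distributions per state.
CHAIN = {
    "s0":   [("s0", 0.6), ("s1", 0.3), ("sink", 0.1)],
    "s1":   [("s0", 0.5), ("GOAL", 0.5)],
    "GOAL": [("GOAL", 1.0)],
    "sink": [("sink", 1.0)],          # from here, GOAL is unreachable
}

def step(state):
    """Sample the successor of `state` according to CHAIN."""
    r, acc = random.random(), 0.0
    for succ, p in CHAIN[state]:
        acc += p
        if r <= acc:
            return succ
    return CHAIN[state][-1][0]

def monitor_with_resets(patience=20, max_resets=1000):
    """Keep resetting runs that have not reached GOAL within `patience` steps."""
    resets = 0
    while resets < max_resets:
        state, trace = "s0", ["s0"]
        for _ in range(patience):
            state = step(state)
            trace.append(state)
            if state == "GOAL":       # run satisfies the toy property
                return resets, trace
        resets += 1                    # abort unpromising prefix, restart
    raise RuntimeError("gave up after too many resets")

if __name__ == "__main__":
    resets, trace = monitor_with_resets()
    print(f"accepted after {resets} resets: {' -> '.join(trace)}")
```

Since each attempt reaches GOAL with positive probability, the number of resets is finite with probability 1, which is the shape of guarantee the abstract describes.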
In this paper, we address the approximate minimization problem of Markov Chains (MCs) from a behavioral metric-based perspective. Specifically, given a finite MC and a positive integer k, we are looking for an MC with at most k states having minimal
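As a rough illustration of the ingredients such a metric-based view involves, the sketch below iterates a discounted, bisimilarity-style pseudometric on a small labelled MC, using an optimal-transport (Kantorovich) lifting solved as a linear program; the toy chain, labels, and discount factor are assumptions, and this is not the minimization algorithm itself. A k-state approximation could then, for instance, repeatedly merge the closest pair of states until only k remain.

```python
# A rough sketch (not the paper's algorithm) of a behavioral, bisimilarity-style
# pseudometric on the states of a small labelled MC, computed by fixed-point
# iteration with a Kantorovich (optimal-transport) lifting solved as an LP.
# The labels, discount factor, and toy chain are hypothetical stand-ins.
import numpy as np
from scipy.optimize import linprog

GAMMA = 0.9                                  # discount factor of the metric
labels = ["a", "a", "b"]                     # state labels
P = np.array([[0.5, 0.5, 0.0],               # transition matrix
              [0.4, 0.4, 0.2],
              [0.0, 0.0, 1.0]])

def kantorovich(mu, nu, d):
    """Optimal-transport lifting of ground metric d to distributions mu, nu."""
    n = len(mu)
    c = d.flatten()                          # cost of moving mass i -> j
    A_eq, b_eq = [], []
    for i in range(n):                       # row marginals must equal mu
        row = np.zeros((n, n)); row[i, :] = 1
        A_eq.append(row.flatten()); b_eq.append(mu[i])
    for j in range(n):                       # column marginals must equal nu
        col = np.zeros((n, n)); col[:, j] = 1
        A_eq.append(col.flatten()); b_eq.append(nu[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * (n * n), method="highs")
    return res.fun

def behavioral_metric(P, labels, iters=50):
    n = len(P)
    d = np.zeros((n, n))
    for _ in range(iters):                   # iterate towards the fixed point
        new = np.zeros((n, n))
        for s in range(n):
            for t in range(n):
                if labels[s] != labels[t]:
                    new[s, t] = 1.0          # immediately distinguishable
                else:
                    new[s, t] = GAMMA * kantorovich(P[s], P[t], d)
        if np.allclose(new, d):
            break
        d = new
    return d

if __name__ == "__main__":
    print(np.round(behavioral_metric(P, labels), 3))
```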
When dealing with finite Markov chains in discrete time, the focus often lies on convergence behavior, and one tries to make different copies of the chain meet as fast as possible and then stick together. There is, however, a very peculiar kind of discrete
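For context, the classical coupling construction hinted at above can be sketched as follows, assuming a hypothetical three-state chain: two copies evolve independently until they first meet and are glued together afterwards.

```python
# A textbook-style illustration of the coupling idea mentioned above (not the
# particular construction the abstract goes on to describe): two copies of the
# same chain start from different states, move independently until they first
# meet, and stick together afterwards. The chain and start states are
# hypothetical; the tail of the coupling time bounds the total-variation
# distance between the two copies' laws (the coupling inequality).
import random

P = {                                        # toy 3-state chain
    0: [(0, 0.5), (1, 0.3), (2, 0.2)],
    1: [(0, 0.2), (1, 0.5), (2, 0.3)],
    2: [(0, 0.3), (1, 0.2), (2, 0.5)],
}

def step(state):
    r, acc = random.random(), 0.0
    for succ, p in P[state]:
        acc += p
        if r <= acc:
            return succ
    return P[state][-1][0]

def coupling_time(x0, y0, horizon=10_000):
    """Steps until the two copies first meet (afterwards they stay together)."""
    x, y = x0, y0
    for t in range(horizon):
        if x == y:
            return t
        x, y = step(x), step(y)              # independent moves before meeting
    return horizon

if __name__ == "__main__":
    times = [coupling_time(0, 2) for _ in range(1000)]
    print("mean coupling time:", sum(times) / len(times))
```

Letting the copies move independently before they meet is the simplest possible choice; cleverer couplings make the copies meet sooner and hence give sharper convergence bounds.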
We review recent results on the metastable behavior of continuous-time Markov chains derived through the characterization of Markov chains as unique solutions of martingale problems.
We introduce the space of virtual Markov chains (VMCs) as a projective limit of the spaces of all finite state space Markov chains (MCs), in the same way that the space of virtual permutations is the projective limit of the spaces of all permutations
This paper introduces two mechanisms for computing over-approximations of sets of reachable states, with the aim of ensuring termination of state-space exploration. The first mechanism consists in over-approximating the automata representing reachable
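As a deliberately simplified illustration of the spirit of the first mechanism (forcing termination by over-approximating the reachable set), the sketch below replaces automata-based representations with a single integer counter abstracted by intervals and a classic widening; the transition relation is an assumption.

```python
# A much-simplified illustration of the general idea (over-approximate the set
# of reachable states so that exploration terminates), using interval widening
# on a single integer counter rather than automata-based representations.
# The toy transition relation is hypothetical.
import math

def successors(lo, hi):
    """Image of the interval [lo, hi] under two transitions: increment or reset."""
    inc = (lo + 1, hi + 1)            # increment the counter
    rst = (0, 0)                      # reset the counter
    return [inc, rst]

def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(old, new):
    """Classic interval widening: unstable bounds jump to +/- infinity."""
    lo = old[0] if new[0] >= old[0] else -math.inf
    hi = old[1] if new[1] <= old[1] else math.inf
    return (lo, hi)

def reachable(init=(0, 0)):
    current = init
    while True:
        post = current
        for s in successors(*current):
            post = join(post, s)
        nxt = widen(current, post)
        if nxt == current:            # fixpoint: an over-approximation is found
            return current
        current = nxt

if __name__ == "__main__":
    print("over-approximation of reachable counter values:", reachable())
```

Exact exploration of this system would enumerate counter values forever; widening sacrifices precision (here it returns the interval [0, inf)) in exchange for guaranteed termination, which is the trade-off the abstract targets.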