In this paper we propose two behavioral distances that support approximate reasoning on Stochastic Markov Models (SMMs), which are continuous-time stochastic transition systems where the residence time on each state is described by a generic probability measure on the positive real line. In particular, we study the problem of measuring the behavioral dissimilarity of two SMMs against linear real-time specifications expressed as Metric Temporal Logic (MTL) formulas or Deterministic Timed Automata (DTA). The most natural such distance measures the maximal difference that can be observed between two SMMs with respect to their probability of satisfying an arbitrary specification. We show that computing this distance is NP-hard; moreover, even approximating it within a certain absolute error, which depends on the size of the SMMs, remains NP-hard. Nevertheless, we introduce an alternative distance, based on the Kantorovich metric, that over-approximates the former, and we show that, under mild assumptions on the residence time distributions, it can be computed in polynomial time.
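As a sketch, in notation assumed here rather than taken from the paper, the first distance between two states (or models) s and t can be written as

\[
  d(s,t) \;=\; \sup_{\varphi} \bigl|\, \mathbb{P}_s(\varphi) - \mathbb{P}_t(\varphi) \,\bigr| ,
\]

where $\varphi$ ranges over the linear real-time specifications (MTL formulas or DTAs) and $\mathbb{P}_s(\varphi)$ denotes the probability that a timed run of the SMM starting in $s$ satisfies $\varphi$. The alternative distance relies on the standard Kantorovich lifting, which compares two probability distributions $\mu$ and $\nu$ over states, relative to a distance $d$ on states, by

\[
  K(d)(\mu,\nu) \;=\; \min_{\omega \in \Omega(\mu,\nu)} \sum_{x,y} d(x,y)\,\omega(x,y) ,
\]

where $\Omega(\mu,\nu)$ is the set of couplings of $\mu$ and $\nu$. Over finite state spaces this lifting reduces to a linear program, which is consistent with the polynomial-time computability claimed for the over-approximation.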