The process algebra HYPE was recently proposed as a fine-grained modelling approach for capturing the behaviour of hybrid systems. In the original proposal, each flow or influence affecting a variable is modelled separately, and the overall behaviour of the system then emerges as the composition of these flows. The discrete behaviour of the system is captured by instantaneous actions, which may be urgent, taking effect as soon as some activation condition is satisfied, or non-urgent, meaning that they can tolerate some (unknown) delay before happening. In this paper we refine the notion of non-urgent actions, making such actions governed by a probability distribution. As a consequence, we now give HYPE a semantics in terms of Transition-Driven Stochastic Hybrid Automata, which are a subset of a general class of stochastic processes termed Piecewise Deterministic Markov Processes.
Stochastic HYPE is a novel process algebra that models stochastic, instantaneous and continuous behaviour. It develops the flow-based approach of the hybrid process algebra HYPE by replacing non-urgent events with events with exponentially distributed durations, and also introduces random resets. Random resets allow for general stochasticity, and in particular for event durations drawn from distributions other than the exponential. To account for stochasticity, the semantics of stochastic HYPE target piecewise deterministic Markov processes (PDMPs) via intermediate transition-driven stochastic hybrid automata (TDSHA), in contrast to the hybrid automata used as the semantic target for HYPE. Stochastic HYPE models have a specific structure in which the controller of a system is separate from its continuous aspects, providing separation of concerns and supporting reasoning. A novel equivalence is defined which captures when two models have the same stochastic behaviour (as in stochastic bisimulation), instantaneous behaviour (as in classical bisimulation) and continuous behaviour. These techniques are illustrated via an assembly line example.
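To make the flow-based idea concrete, the following is a minimal sketch, not the algebra's actual syntax or semantics: a single continuous variable is driven by separately modelled influences, and a stochastic event with an exponentially distributed delay (as in stochastic HYPE) removes one of them. The tank scenario, rates and flow values are hypothetical choices for illustration.

import random

def simulate(t_end=10.0, dt=0.001, inflow=2.0, drain=1.0, fail_rate=0.3):
    # Each influence on the tank level is modelled separately; the overall
    # behaviour emerges as the composition (sum) of the active flows.
    flows = {"inflow": inflow, "drain": -drain}
    level, t = 0.0, 0.0
    t_fail = random.expovariate(fail_rate)  # exponentially distributed event delay
    while t < t_end:
        if t >= t_fail and "inflow" in flows:
            del flows["inflow"]             # instantaneous event: pump fails
        level = max(level + sum(flows.values()) * dt, 0.0)
        t += dt
    return level

print(simulate())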
We present a sound and complete method for the verification of qualitative liveness properties of replicated systems under stochastic scheduling. These are systems consisting of a finite-state program, executed by an unknown number of indistinguishable agents, where the next agent to make a move is determined by the result of a random experiment. We show that if a property of such a system holds, then there is always a witness in the form of a Presburger stage graph: a finite graph whose nodes are Presburger-definable sets of configurations. Due to the high (non-elementary) complexity of the verification problem, we introduce an incomplete procedure for the construction of Presburger stage graphs and implement it on top of an SMT solver. The procedure makes extensive use of the theory of well-quasi-orders, and of the structural theory of Petri nets and vector addition systems. We apply our results to a set of benchmarks, in particular to a large collection of population protocols, a model of distributed computation extensively studied by the distributed computing community.
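As an illustration of the kind of check such a procedure delegates to an SMT solver, the sketch below tests whether a candidate Presburger-definable set of configurations is closed under one transition of a toy two-state protocol. The protocol, the candidate stage, and the encoding are hypothetical simplifications, not the paper's actual construction or benchmarks.

from z3 import Ints, Solver, And, Not, unsat

x, y, xp, yp = Ints("x y xp yp")   # counts of agents in states A and B

stage = x + y >= 2                 # candidate stage: at least two agents
# one toy transition: an A-agent and a B-agent meet and both become B
step = And(x >= 1, y >= 1, xp == x - 1, yp == y + 1)

s = Solver()
# the stage is closed under the step if no step leads from inside it to outside it
s.add(And(stage, step, Not(xp + yp >= 2)))
print("closed under step" if s.check() == unsat else "not closed")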
Many network optimization problems can be formulated as stochastic network design problems in which edges are present or absent stochastically. Furthermore, protective actions can guarantee that edges will remain present. We consider the problem of finding the optimal protection strategy under a budget limit in order to maximize connectivity measures of the network. Previous approaches rely on the assumption that edges are independent. In this paper, we consider a more realistic setting in which multiple edges are not independent, because natural disasters or regional events make the states of multiple edges stochastically correlated. We use Markov Random Fields to model the correlation and define a new stochastic network design framework. We provide a novel algorithm based on Sample Average Approximation (SAA) coupled with a Gibbs or XOR sampler. Experimental results on real road network data show that the policies produced by SAA with the XOR sampler have higher quality and lower variance than those produced by SAA with the Gibbs sampler.
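The following sketch illustrates the SAA idea under stated assumptions: edge states are drawn from a small, hypothetical pairwise MRF with a Gibbs sampler, and a budgeted protection set is chosen to maximize the sampled frequency of source-to-target connectivity on a three-edge toy graph. It is not the paper's algorithm, and the XOR sampler is not shown.

import itertools, math, random

EDGES = [(0, 1), (1, 2), (0, 2)]
COUPLING = 1.0   # positive coupling: edges tend to fail or survive together
FIELD = 0.5      # bias toward edge presence

def gibbs_sample(n_sweeps=50):
    state = [1, 1, 1]                       # 1 = edge present, 0 = absent
    for _ in range(n_sweeps):
        for i in range(len(EDGES)):
            field = FIELD + sum(COUPLING * (2 * state[j] - 1)
                                for j in range(len(EDGES)) if j != i)
            p1 = 1.0 / (1.0 + math.exp(-field))
            state[i] = 1 if random.random() < p1 else 0
    return list(state)

def connected(present, protected_):
    # connectivity measure: can node 2 be reached from node 0?
    ok = [e for i, e in enumerate(EDGES) if present[i] or i in protected_]
    reach, changed = {0}, True
    while changed:
        changed = False
        for a, b in ok:
            if a in reach and b not in reach:
                reach.add(b); changed = True
            if b in reach and a not in reach:
                reach.add(a); changed = True
    return 2 in reach

scenarios = [gibbs_sample() for _ in range(200)]        # SAA sample set
budget = 1
best = max(itertools.combinations(range(len(EDGES)), budget),
           key=lambda prot: sum(connected(s, set(prot)) for s in scenarios))
print("protect edges:", best)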
In the classical synthesis problem, we are given an LTL formula psi over sets of input and output signals, and we synthesize a transducer that realizes psi. One weakness of automated synthesis in practice is that it pays no attention to the quality of the synthesized system. Indeed, the classical setting is Boolean: a computation either satisfies a specification or does not. Accordingly, while the synthesized system is correct, there is no guarantee about its quality. In recent years, researchers have considered extensions of the classical Boolean setting to a quantitative one. The logic LTL[F] is a multi-valued logic that augments LTL with quality operators. The satisfaction value of an LTL[F] formula is a real value in [0,1]: the higher the value, the higher the quality with which the computation satisfies the specification. Decision problems for LTL become search or optimization problems for LTL[F]. In particular, in the synthesis problem, the goal is to generate a transducer that satisfies the specification with the highest possible quality. Previous work considered the worst-case setting, where the goal is to maximize the quality of the computation with the minimal quality. We introduce and solve the stochastic setting, where the goal is to generate a transducer that maximizes the expected quality of a computation, subject to a given distribution of the input signals. Thus, rather than being hostile, the environment is assumed to be probabilistic, which corresponds to many realistic settings. We show that the problem is 2EXPTIME-complete, like classical LTL synthesis, and remains so in two extensions we consider: one that maximizes the expected quality while guaranteeing that the minimal quality is, with probability 1, above a given threshold, and one that allows assumptions on the environment.
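A minimal worked example of the expected-quality objective, assuming the weighted-average quality operator with value lambda*x + (1-lambda)*y, a typical LTL[F] quality operator, over a one-step interaction: the input distribution and the candidate strategies are hypothetical, and this is not the paper's 2EXPTIME synthesis procedure.

LAM = 0.7
P_X1 = 0.6                                  # Pr[input signal x is 1]

def value(x, y):
    return LAM * x + (1 - LAM) * y          # satisfaction value in [0,1]

# a (one-step) strategy maps the observed input to an output signal
strategies = {"const0": lambda x: 0, "const1": lambda x: 1,
              "copy": lambda x: x, "negate": lambda x: 1 - x}

for name, f in strategies.items():
    exp_q = P_X1 * value(1, f(1)) + (1 - P_X1) * value(0, f(0))
    print(f"{name}: expected quality = {exp_q:.3f}")

Under this distribution the constant-1 strategy maximizes the expectation (0.72), even though it is not optimal for every individual input.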
In this paper we consider a stochastic process that may experience random reset events which suddenly bring the system back to its starting value, and we analyze the relevant statistical quantities. We focus our attention on monotonic continuous-time random walks with a constant drift: the process increases between the reset events, either through random jumps or through the action of the deterministic drift. As a result of these combined factors, interesting properties emerge, such as the existence, for any drift strength, of a stationary transition probability density function, and the ability of the model to reproduce power-law-like behavior. General formulas for two extreme statistics, the survival probability and the mean exit time, are also derived. To corroborate the results of the paper independently, we used Monte Carlo methods. These numerical estimates are in full agreement with the analytical predictions.
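A minimal Monte Carlo sketch of the setting, under assumed distributions: an increasing walk with constant drift, exponentially distributed waiting times between positive jumps, and Poissonian resets to the origin, used to estimate the mean exit time through a threshold. All rates and the threshold are hypothetical choices; the paper derives the general formulas analytically.

import random

def exit_time(threshold=5.0, drift=0.5, jump_rate=1.0,
              mean_jump=1.0, reset_rate=0.2):
    x, t = 0.0, 0.0
    while True:
        dt_jump = random.expovariate(jump_rate)      # next upward jump
        dt_reset = random.expovariate(reset_rate)    # next reset event
        dt = min(dt_jump, dt_reset)
        # does the deterministic drift alone cross the threshold first?
        if drift > 0 and (threshold - x) / drift <= dt:
            return t + (threshold - x) / drift
        x += drift * dt
        t += dt
        if dt_reset < dt_jump:
            x = 0.0                                   # reset to the origin
        else:
            x += random.expovariate(1.0 / mean_jump)  # positive random jump
            if x >= threshold:
                return t

n = 5000
print(sum(exit_time() for _ in range(n)) / n)         # mean exit time estimate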