
Parameter Synthesis for Markov Models: Faster Than Ever

Published by: Nils Jansen
Publication date: 2016
Research field: Informatics Engineering
Paper language: English





We propose a simple technique for verifying probabilistic models whose transition probabilities are parametric. The key is to replace parametric transitions by nondeterministic choices of extremal values. Analysing the resulting parameter-free model using off-the-shelf means yields (refinable) lower and upper bounds on probabilities of regions in the parameter space. The technique outperforms the existing analysis of parametric Markov chains by several orders of magnitude regarding both run-time and scalability. Its beauty is its applicability to various probabilistic models. It in particular provides the first sound and feasible method for performing parameter synthesis of Markov decision processes.
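The core idea lends itself to a compact illustration. Below is a minimal, self-contained Python sketch of the "lift to nondeterminism" step for a hypothetical two-parameter Markov chain and a rectangular parameter region: every parametric transition is evaluated at the region's extremal valuations (its vertices), and value iteration over the resulting nondeterministic model yields a lower and an upper bound on the reachability probability that hold for the whole region. The toy model, the region, and the plain value iteration are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: replace parametric transitions by a nondeterministic choice
# over the extremal parameter valuations (region vertices), then run value
# iteration to obtain lower/upper bounds on reaching the target state.
from itertools import product

# Hypothetical parametric Markov chain: transitions[s] = [(successor, f(valuation)), ...]
transitions = {
    0: [(1, lambda v: v["p"]), (2, lambda v: 1 - v["p"])],
    1: [(3, lambda v: v["q"]), (0, lambda v: 1 - v["q"])],
    2: [(2, lambda v: 1.0)],   # failure sink
    3: [(3, lambda v: 1.0)],   # target (absorbing)
}
target = 3
region = {"p": (0.1, 0.9), "q": (0.4, 0.6)}  # rectangular parameter region

# Vertices of the region: every combination of lower/upper bound per parameter.
names = list(region)
vertices = [dict(zip(names, vals)) for vals in product(*(region[n] for n in names))]

def bound(minimise, iters=1000):
    """Value iteration on the lifted model: at each state, pick the vertex
    that minimises (lower bound) or maximises (upper bound) the value."""
    val = {s: (1.0 if s == target else 0.0) for s in transitions}
    for _ in range(iters):
        new = {}
        for s in transitions:
            if s == target:
                new[s] = 1.0
                continue
            choices = [sum(f(v) * val[t] for t, f in transitions[s]) for v in vertices]
            new[s] = min(choices) if minimise else max(choices)
        val = new
    return val[0]

print("lower bound on reachability over the region:", bound(minimise=True))
print("upper bound on reachability over the region:", bound(minimise=False))
```

Because the per-state choices relax the dependencies between parameter occurrences, the bounds are sound for every valuation in the region and can be refined by splitting the region into smaller rectangles.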


Read also

Markov chain analysis is a key technique in reliability engineering. A practical obstacle is that all probabilities in Markov models need to be known. However, system quantities such as failure rates or packet loss ratios are often not, or only partially, known. This motivates considering parametric models with transitions labeled with functions over parameters. Whereas traditional Markov chain analysis evaluates a reliability metric for a single, fixed set of probabilities, analysing parametric Markov models focuses on synthesising parameter values that establish a given reliability or performance specification $\varphi$. Examples are: which component failure rates ensure that the probability of a system breakdown is below 0.00000001?, or which failure rates maximise reliability? This paper presents various analysis algorithms for parametric Markov chains and Markov decision processes. We focus on three problems: (a) do all parameter values within a given region satisfy $\varphi$?, (b) which regions satisfy $\varphi$ and which ones do not?, and (c) an approximate version of (b) focusing on covering a large fraction of all possible parameter values. We give a detailed account of the various algorithms, present a software tool realising these techniques, and report on an extensive experimental evaluation on benchmarks that span a wide range of applications.
When modeling concurrent or cyber-physical systems, non-functional requirements such as time are important to consider. In order to improve the timing aspects of a model, it is necessary to have some notion of what it means for a process to be faster than another, which can guide the stepwise refinement of the model. To this end we study a faster-than relation for semi-Markov decision processes and compare it to standard notions for relating systems. We consider the compositional aspects of this relation, and show that the faster-than relation is not a precongruence with respect to parallel composition, hence giving rise to so-called parallel timing anomalies. We take the first steps toward understanding this problem by identifying decidable conditions sufficient to avoid parallel timing anomalies in the absence of non-determinism.
We study finite-state controllers (FSCs) for partially observable Markov decision processes (POMDPs) that are provably correct with respect to given specifications. The key insight is that computing (randomised) FSCs on POMDPs is equivalent to - and computationally as hard as - synthesis for parametric Markov chains (pMCs). This correspondence allows us to use tools for parameter synthesis in pMCs to compute correct-by-construction FSCs on POMDPs for a variety of specifications. Our experimental evaluation shows comparable performance to well-known POMDP solvers.
We study a phase transition in parameter learning of Hidden Markov Models (HMMs). We do this by generating sequences of observed symbols from given discrete HMMs with uniformly distributed transition probabilities and a noise level encoded in the output probabilities. By using the Baum-Welch (BW) algorithm, an Expectation-Maximization algorithm from the field of Machine Learning, we then try to estimate the parameters of each investigated realization of an HMM. We study HMMs with n=4, 8 and 16 states. By changing the amount of accessible learning data and the noise level, we observe a phase-transition-like change in the performance of the learning algorithm. For larger HMMs and more learning data, the learning behavior improves tremendously below a certain threshold in the noise strength. For a noise level above the threshold, learning is not possible. Furthermore, we use an overlap parameter applied to the results of a maximum-a-posteriori (Viterbi) algorithm to investigate the accuracy of the hidden state estimation around the phase transition.
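The kind of experiment described above can be sketched in a few lines, assuming the hmmlearn library (its CategoricalHMM class in recent releases) is available; the model size, noise level, and sequence length below are illustrative choices, not the paper's actual settings.

```python
# Sketch of the described experiment: sample from a noisy ground-truth HMM,
# re-estimate it with Baum-Welch (EM), and score Viterbi decoding by the best
# label-permutation overlap with the true hidden states.
import numpy as np
from itertools import permutations
from hmmlearn import hmm  # assumed dependency; CategoricalHMM in recent versions

rng = np.random.default_rng(0)
n_states, n_symbols, seq_len, noise = 4, 4, 5000, 0.2  # illustrative values

# Ground-truth HMM: random transition rows; emissions output the "correct"
# symbol with probability 1 - noise and a wrong one otherwise.
A = rng.dirichlet(np.ones(n_states), size=n_states)
B = np.full((n_states, n_symbols), noise / (n_symbols - 1))
np.fill_diagonal(B, 1.0 - noise)

truth = hmm.CategoricalHMM(n_components=n_states)
truth.startprob_ = np.full(n_states, 1.0 / n_states)
truth.transmat_ = A
truth.emissionprob_ = B
obs, hidden = truth.sample(seq_len, random_state=0)

# Baum-Welch estimation from the observed symbols alone.
learner = hmm.CategoricalHMM(n_components=n_states, n_iter=200, random_state=1)
learner.fit(obs)

# Viterbi decoding; the overlap is taken over the best relabeling of states.
decoded = learner.predict(obs)
overlap = max(
    np.mean(np.array(perm)[decoded] == hidden)
    for perm in permutations(range(n_states))
)
print("log-likelihood:", learner.score(obs))
print("state overlap (best label permutation):", overlap)
```

Sweeping the noise level and the sequence length in such a script is what exposes the phase-transition-like behaviour the abstract refers to.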
Many complex systems can be described by population models, in which a pool of agents interacts and produces complex collective behaviours. We consider the problem of verifying formal properties of the underlying mathematical representation of these models, which is a Continuous Time Markov Chain, often with a huge state space. To circumvent the state space explosion, we rely on stochastic approximation techniques, which replace the large model by a simpler one, guaranteed to be probabilistically consistent. We show how to efficiently and accurately verify properties of random individual agents, specified by Continuous Stochastic Logic extended with Timed Automata (CSL-TA), and how to lift these specifications to the collective level, approximating the number of agents satisfying them using second or higher order stochastic approximation techniques.