
Approximate Abstractions of Markov Chains with Interval Decision Processes (Extended Version)

Submitted by Yuriy Zacchia Lun
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





This work introduces a new abstraction technique for reducing the state space of large, discrete-time labelled Markov chains. The abstraction leverages the semantics of interval Markov decision processes and the existing notion of approximate probabilistic bisimulation. Whilst standard abstractions use abstract points that are taken from the state space of the concrete model and serve as representatives for sets of concrete states, in this work the abstract structure is built from abstract points that are not necessarily selected from the states of the concrete model; rather, they are functions of these states. The resulting model exhibits a smaller one-step bisimulation error when compared to a like-sized, standard Markov chain abstraction. We outline a method to perform probabilistic model checking, and show that the computational complexity of the new method is comparable to that of standard abstractions based on approximate probabilistic bisimulations.
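As a rough sketch of the two ingredients combined above, in notation chosen for this summary rather than taken from the paper: an interval abstraction bounds every concrete transition probability by interval-valued transitions between abstract points, and an epsilon-approximate probabilistic bisimulation bounds the one-step deviation between a concrete state and the abstract point representing it:

    % Illustrative symbols, not the paper's notation.
    \check{P}(z, z') \;\le\; P(s, s') \;\le\; \hat{P}(z, z')
    \quad \text{whenever } s \text{ is represented by } z \text{ and } s' \text{ by } z',

    \bigl| P(s, T) - P_{\mathrm{abs}}(z, T) \bigr| \;\le\; \varepsilon
    \quad \text{for every set of states } T \text{ (one-step bisimulation error)}.

Here \check{P}, \hat{P}, P_{\mathrm{abs}} and \varepsilon are illustrative; the paper's choice of abstract points as functions of the concrete states is what allows \varepsilon to be smaller than in a standard, like-sized abstraction.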


Read also

Interval Markov decision processes (IMDPs) generalise classical MDPs by having interval-valued transition probabilities. They provide a powerful modelling tool for probabilistic systems with an additional variation or uncertainty that prevents the knowledge of the exact transition probabilities. In this paper, we consider the problem of multi-objective robust strategy synthesis for interval MDPs, where the aim is to find a robust strategy that guarantees the satisfaction of multiple properties at the same time in the face of the transition probability uncertainty. We first show that this problem is PSPACE-hard. Then, we provide a value iteration-based decision algorithm to approximate the Pareto set of achievable points. We finally demonstrate the practical effectiveness of our proposed approaches by applying them to several case studies using a prototypical tool.
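As a hedged sketch (not the paper's algorithm or tool), the core of value iteration over an interval MDP is an inner optimisation over the transition polytope; for a single scalar objective this worst-case step can be solved greedily by ordering successors by their current values. All names below are illustrative assumptions:

    # Illustrative robust value-iteration step for an interval MDP.
    # Assumes sum(lower) <= 1 <= sum(upper) for every state-action pair.
    def worst_case_expectation(lower, upper, values):
        """Minimise sum_i p[i]*values[i] over lower <= p <= upper with sum(p) = 1:
        start from the lower bounds, then pour the remaining probability mass
        onto the successors with the smallest values first."""
        p = list(lower)
        remaining = 1.0 - sum(lower)
        for i in sorted(range(len(values)), key=lambda j: values[j]):
            extra = min(upper[i] - lower[i], remaining)
            p[i] += extra
            remaining -= extra
        return sum(pi * v for pi, v in zip(p, values))

    def robust_bellman_update(state, actions, lower, upper, values):
        """Best action against the worst resolution of the interval uncertainty
        (for a reachability objective, target states stay fixed at value 1)."""
        return max(
            worst_case_expectation(lower[state][a], upper[state][a], values)
            for a in actions
        )

Multi-objective synthesis, as studied in the paper, replaces this scalar backup with an approximation of the Pareto set of achievable points, but the inner worst-case step has the same flavour.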
Automata learning techniques automatically generate system models from test observations. These techniques usually fall into two categories: passive and active. Passive learning uses a predetermined data set, e.g., system logs. In contrast, active learning actively queries the system under learning, which is considered more efficient. An influential active learning technique is Angluin's L* algorithm for regular languages, which inspired several generalisations from DFAs to other automata-based modelling formalisms. In this work, we study L*-based learning of deterministic Markov decision processes, first assuming an ideal setting with perfect information. Then, we relax this assumption and present a novel learning algorithm that collects information by sampling system traces via testing. Experiments with the implementation of our sampling-based algorithm suggest that it achieves better accuracy than state-of-the-art passive learning techniques with the same amount of test data. Unlike existing learning algorithms with predefined states, our algorithm learns the complete model structure, including the states.
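A hedged illustration of the sampling ingredient only (the full L*-style learner maintains observation tables and equivalence checks on top of this): transition probabilities of a deterministic MDP can be estimated from tested traces by frequency counts. The trace format below is an assumption made for this example:

    from collections import Counter, defaultdict

    # Illustrative only: estimate P(s' | s, a) from observed
    # (state, action, next_state) triples collected by testing
    # the system under learning.
    def estimate_transitions(triples):
        counts = defaultdict(Counter)
        for s, a, s_next in triples:
            counts[(s, a)][s_next] += 1
        return {
            (s, a): {s_next: n / sum(c.values()) for s_next, n in c.items()}
            for (s, a), c in counts.items()
        }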
In this work we introduce new approximate similarity relations that are shown to be key for policy (or control) synthesis over general Markov decision processes. The models of interest are discrete-time Markov decision processes, endowed with uncountably-infinite state spaces and metric output (or observation) spaces. The new relations, underpinned by the use of metrics, allow in particular for a useful trade-off between deviations over probability distributions on states, and distances between model outputs. We show that the new probabilistic similarity relations, inspired by a notion of simulation developed for finite-state models, can be effectively employed over general Markov decision processes for verification purposes, and specifically for control refinement from abstract models.
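One way to picture the trade-off mentioned above, again in notation chosen for this summary rather than taken from the paper: a relation R between states s of the concrete model and states \hat{s} of the abstraction is an approximate probabilistic simulation with output precision \varepsilon and probability deviation \delta if

    d_Y\bigl(h(s), \hat{h}(\hat{s})\bigr) \;\le\; \varepsilon
    \quad \text{and the successor distributions of } s \text{ and } \hat{s}
    \text{ can be coupled so that the pair stays in } R
    \text{ with probability at least } 1 - \delta,

where h, \hat{h} are the output maps and d_Y is the metric on the output space. Tightening one parameter typically loosens the other, which is the trade-off such relations are designed to expose.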
Model checking probabilistic CTL properties of Markov decision processes with convex uncertainties has been recently investigated by Puggelli et al. Such model checking algorithms typically suffer from the state space explosion. In this paper, we address probabilistic bisimulation to reduce the size of such an MDP while preserving the probabilistic CTL properties it satisfies. In particular, we discuss the key ingredients to build up the operations of parallel composition for composing interval MDP components at run-time. More precisely, we investigate how the parallel composition operator for interval MDPs can be defined so as to arrive at a congruence closure. As a result, we show that probabilistic bisimulation for interval MDPs is a congruence with respect to two facets of parallelism, namely synchronous product and interleaving.
In most real cases transition probabilities between operational modes of Markov jump linear systems cannot be computed exactly and are time-varying. We take into account this aspect by considering Markov jump linear systems where the underlying Markov chain is polytopic and time-inhomogeneous, i.e. its transition probability matrix is varying over time, with variations that are arbitrary within a polytopic set of stochastic matrices. We address and solve for this class of systems the infinite-horizon optimal control problem. In particular, we show that the optimal controller can be obtained from a set of coupled algebraic Riccati equations, and that for mean square stabilizable systems the optimal finite-horizon cost corresponding to the solution to a parsimonious set of coupled difference Riccati equations converges exponentially fast to the optimal infinite-horizon cost related to the set of coupled algebraic Riccati equations. All the presented concepts are illustrated on a numerical example showing the efficiency of the provided solution.
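For reference, and hedged accordingly: in the nominal case with an exactly known, time-homogeneous transition matrix (p_{ij}), the coupled algebraic Riccati equations for a discrete-time Markov jump linear system x_{k+1} = A_{\theta_k} x_k + B_{\theta_k} u_k read

    X_i \;=\; A_i^\top \mathcal{E}_i(X) A_i + Q_i
    \;-\; A_i^\top \mathcal{E}_i(X) B_i \bigl(R_i + B_i^\top \mathcal{E}_i(X) B_i\bigr)^{-1} B_i^\top \mathcal{E}_i(X) A_i,
    \qquad
    \mathcal{E}_i(X) \;=\; \sum_j p_{ij} X_j,

with the optimal mode-dependent feedback u_k = -\bigl(R_{\theta_k} + B_{\theta_k}^\top \mathcal{E}_{\theta_k}(X) B_{\theta_k}\bigr)^{-1} B_{\theta_k}^\top \mathcal{E}_{\theta_k}(X) A_{\theta_k}\, x_k. The polytopic, time-inhomogeneous setting of the paper generalises this nominal form.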