
A Complete Quantitative Deduction System for the Bisimilarity Distance on Markov Chains

Submitted by Christoph Rauch
Publication date: 2017
Research field: Informatics engineering
Paper language: English





In this paper we propose a complete axiomatization of the bisimilarity distance of Desharnais et al. for the class of finite labelled Markov chains. Our axiomatization is given in the style of a quantitative extension of equational logic recently proposed by Mardare, Panangaden, and Plotkin (LICS 2016) that uses equality relations $t \equiv_\varepsilon s$ indexed by rationals, expressing that `$t$ is approximately equal to $s$ up to an error $\varepsilon$'. Notably, our quantitative deduction system extends in a natural way the equational system for probabilistic bisimilarity given by Stark and Smolka by introducing an axiom for dealing with the Kantorovich distance between probability distributions. The axiomatization is then used to propose a metric extension of a Kleene-style representation theorem for finite labelled Markov chains, which was proposed (in a more general coalgebraic fashion) by Silva et al. (Inf. Comput. 2011).
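For intuition about the object being axiomatized, the following is a minimal numeric sketch, not the paper's deduction system: it approximates a Desharnais-style bisimilarity pseudometric on a tiny labelled Markov chain by iterating the Kantorovich lifting, solving each transport problem as a linear program with SciPy. The example chain, the discount factor, and the iteration cutoff are illustrative assumptions.

# Sketch: approximate a bisimilarity pseudometric by Kantorovich iteration.
import numpy as np
from scipy.optimize import linprog

def kantorovich(mu, nu, dist):
    """Least coupling cost between distributions mu, nu w.r.t. ground metric dist."""
    n = len(mu)
    c = dist.reshape(-1)                          # cost of moving mass from i to j
    A_eq = []
    for i in range(n):                            # row marginals: sum_j w[i,j] = mu[i]
        row = np.zeros((n, n)); row[i, :] = 1
        A_eq.append(row.reshape(-1))
    for j in range(n):                            # column marginals: sum_i w[i,j] = nu[j]
        col = np.zeros((n, n)); col[:, j] = 1
        A_eq.append(col.reshape(-1))
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.concatenate([mu, nu]), bounds=(0, None))
    return res.fun

def bisim_distance(labels, P, discount=1.0, iters=50):
    """Iterate d(s,t) = 1 if labels differ, else discount * K(d)(P[s], P[t])."""
    n = len(labels)
    d = np.zeros((n, n))
    for _ in range(iters):                        # Kleene-style iteration from the zero metric
        new = np.zeros((n, n))
        for s in range(n):
            for t in range(n):
                if labels[s] != labels[t]:
                    new[s, t] = 1.0
                else:
                    new[s, t] = discount * kantorovich(P[s], P[t], d)
        d = new
    return d

# Tiny made-up chain: two a-labelled states with slightly different bias
# toward the absorbing b-labelled state; their distance converges to 0.1.
labels = ['a', 'a', 'b']
P = np.array([[0.5, 0.0, 0.5],
              [0.4, 0.0, 0.6],
              [0.0, 0.0, 1.0]])
print(bisim_distance(labels, P))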



Read also

A new weak bisimulation semantics is defined for Markov automata that, in addition to abstracting from internal actions, sums up the expected values of consecutive exponentially distributed delays possibly intertwined with internal actions. The resulting equivalence is shown to be a congruence with respect to parallel composition for Markov automata. Moreover, it turns out to be comparable with weak bisimilarity for timed labeled transition systems, thus constituting a step towards reconciling the semantics for stochastic time and deterministic time.
In a deduction system with some propositions and some known relations among these propositions, one usually cares about a minimum set of propositions from which all other propositions can be deduced according to these known relations. Here we call this a minimizing deduction system. Its common solution is the guess-and-determine method. In this paper we propose a method for solving the minimizing deduction system based on MILP. First, we introduce the concepts of state variable, path variable and state copy, which enable us to characterize all rules by inequalities. Then we reduce the deduction problem to a MILP problem and solve it with the Gurobi optimizer. As applications, we analyze the security of the two stream ciphers SNOW 2.0 and Enocoro-128v2 against guess-and-determine attacks. For SNOW 2.0, it is surprising that it takes less than 0.1 s to obtain the best solution of 9 known variables on a personal MacBook Air (Early 2015, dual-core Intel Core i5 at 1.6 GHz, 4 GB DDR3). For Enocoro-128v2, we obtain the best solution of 18 known variables within 3 minutes. What is more, we propose two improvements that reduce the number of variables and inequalities, which significantly decreases the scale of the MILP problem.
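As a rough illustration of the kind of reduction described in this abstract, and not the paper's actual encoding of SNOW 2.0 or Enocoro-128v2, the toy sketch below phrases a tiny minimizing deduction instance as a 0/1 MILP in PuLP: binary variables record which propositions are guessed and which become known in each round, and the objective minimizes the number of guesses. The rule set, round bound, and names are made up for illustration.

# Sketch: minimum guess set so that all propositions become deducible.
import pulp

variables = ["a", "b", "c", "d", "e"]
rules = [({"a", "b"}, "c"), ({"c"}, "d"), ({"d", "b"}, "e"), ({"e"}, "a")]
R = len(variables)                      # enough rounds to propagate everything

prob = pulp.LpProblem("min_guess", pulp.LpMinimize)
guess = {v: pulp.LpVariable(f"g_{v}", cat="Binary") for v in variables}
known = {(v, r): pulp.LpVariable(f"k_{v}_{r}", cat="Binary")
         for v in variables for r in range(R + 1)}
fired = {(i, r): pulp.LpVariable(f"f_{i}_{r}", cat="Binary")
         for i in range(len(rules)) for r in range(1, R + 1)}

prob += pulp.lpSum(guess.values())      # objective: fewest guessed propositions

for v in variables:
    prob += known[v, 0] == guess[v]     # round 0: only guesses are known
    prob += known[v, R] == 1            # everything must be known at the end
for r in range(1, R + 1):
    for i, (prem, concl) in enumerate(rules):
        for p in prem:                  # a rule fires only if all premises are known
            prob += fired[i, r] <= known[p, r - 1]
    for v in variables:                 # known now => known before, or some rule fired
        prob += known[v, r] <= known[v, r - 1] + pulp.lpSum(
            fired[i, r] for i, (_, c) in enumerate(rules) if c == v)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("guess:", [v for v in variables if guess[v].value() > 0.5])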
Convergence rates of Markov chains have been widely studied in recent years. In particular, quantitative bounds on convergence rates have been studied in various forms by Meyn and Tweedie [Ann. Appl. Probab. 4 (1994) 981-1101], Rosenthal [J. Amer. Statist. Assoc. 90 (1995) 558-566], Roberts and Tweedie [Stochastic Process. Appl. 80 (1999) 211-229], Jones and Hobert [Statist. Sci. 16 (2001) 312-334] and Fort [Ph.D. thesis (2001) Univ. Paris VI]. In this paper, we extend a result of Rosenthal [J. Amer. Statist. Assoc. 90 (1995) 558-566] that concerns quantitative convergence rates for time-homogeneous Markov chains. Our extension allows us to consider f-total variation distance (instead of total variation) and time-inhomogeneous Markov chains. We apply our results to simulated annealing.
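For reference, the $f$-total variation distance mentioned here weights test functions by a function $f \ge 1$; the display below recalls the standard definitions from the Markov chain literature (not taken from the paper itself), with $f \equiv 1$ recovering twice the ordinary total variation distance:
\[
\|\mu - \nu\|_{f} \;=\; \sup_{|g| \le f}\Big|\int g\,d\mu - \int g\,d\nu\Big|,
\qquad
\|\mu - \nu\|_{\mathrm{TV}} \;=\; \sup_{A}\,\big|\mu(A) - \nu(A)\big|.
\]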
This paper considers large families of Markov chains (MCs) that are defined over a set of parameters with finite discrete domains. Such families occur in software product lines, planning under partial observability, and sketching of probabilistic programs. Simple questions, like `does at least one family member satisfy a property?', are NP-hard. We tackle two problems: distinguish family members that satisfy a given quantitative property from those that do not, and determine a family member that satisfies the property optimally, i.e., with the highest probability or reward. We show that combining two well-known techniques, MDP model checking and abstraction refinement, mitigates the computational complexity. Experiments on a broad set of benchmarks show that in many situations our approach is able to handle families of millions of MCs, providing superior scalability compared to existing solutions.
We are interested in the analysis of very large continuous-time Markov chains (CTMCs) with many distinct rates. Such models arise naturally in the context of reliability analysis, e.g., in computer network performability analysis, power grids, computer virus vulnerability, and the study of crowd dynamics. We use abstraction techniques together with novel algorithms for the computation of bounds on the expected final and accumulated rewards in continuous-time Markov decision processes (CTMDPs). These ingredients are combined in a partly symbolic and partly explicit (symblicit) analysis approach. In particular, we circumvent the use of multi-terminal decision diagrams, because the latter do not work well when facing a large number of different rates. We demonstrate the practical applicability and efficiency of the approach on two case studies.