
Reducing Weak to Strong Bisimilarity in CCP

Published by: EPTCS
Publication date: 2012
Research field: Informatics engineering
Paper language: English

Concurrent constraint programming (ccp) is a well-established model for concurrency that singles out the fundamental aspects of asynchronous systems whose agents (or processes) evolve by posting and querying (partial) information in a global medium. Bisimilarity is a standard behavioural equivalence in concurrency theory. However, only recently have a well-behaved notion of bisimilarity for ccp and a ccp partition refinement algorithm for deciding the strong version of this equivalence been proposed. Weak bisimilarity is a central behavioural equivalence in process calculi; it is obtained from the strong case by taking into account only the actions that are observable in the system. Typically, the standard partition refinement can also be used for deciding weak bisimilarity simply by using Milner's reduction from weak to strong bisimilarity, a technique referred to as saturation. In this paper we demonstrate that, because of its involved labeled transitions, the above-mentioned saturation technique does not work for ccp. We give an alternative reduction from weak ccp bisimilarity to the strong one that allows us to use the ccp partition refinement algorithm for deciding this equivalence.
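As background for the saturation technique discussed above, the sketch below shows Milner's standard reduction on a small labelled transition system: weak transitions are computed as tau* a tau* (and tau* for the silent action), after which strong bisimilarity on the saturated system coincides with weak bisimilarity on the original. This is a minimal, generic illustration in Python with invented states and labels, not the ccp-specific reduction contributed by the paper; indeed, the paper's point is that this generic step fails for ccp's labelled transitions.

```python
# Minimal sketch of Milner's saturation on a finite labelled transition system.
# States, labels and transitions below are hypothetical; "tau" is the silent action.

strong = {
    ("p", "tau", "p1"), ("p1", "a", "p2"), ("p2", "tau", "p3"),
    ("q", "a", "q1"),
}

def tau_closure(trans):
    """All pairs (s, t) such that t is reachable from s by zero or more tau steps."""
    states = {s for s, _, _ in trans} | {t for _, _, t in trans}
    reach = {(s, s) for s in states}
    step = {(s, t) for s, l, t in trans if l == "tau"}
    changed = True
    while changed:
        new = {(s, u) for (s, t) in reach for (t2, u) in step if t == t2}
        changed = not new <= reach
        reach |= new
    return reach

def saturate(trans):
    """Weak transitions: tau* for "tau", and tau* a tau* for every observable a."""
    closure = tau_closure(trans)
    weak = {(s, "tau", t) for (s, t) in closure}
    for (s, s1) in closure:                    # s ==tau*==> s1
        for (s2, a, s3) in trans:
            if s1 == s2 and a != "tau":        # s1 --a--> s3 with a observable
                weak |= {(s, a, t) for (s4, t) in closure if s3 == s4}
    return weak

# Running strong-bisimilarity partition refinement on saturate(strong) would
# then decide weak bisimilarity on the original system.
print(sorted(saturate(strong)))
```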


Read also

We introduce a transformation system for concurrent constraint programming (CCP). We define suitable applicability conditions for the transformations which guarantee that the input/output CCP semantics is preserved also when distinguishing deadlocked computations from successful ones and when considering intermediate results of (possibly) non-terminating computations. The system allows us to optimize CCP programs while preserving their intended meaning: In addition to the usual benefits that one has for sequential declarative languages, the transformation of concurrent programs can also lead to the elimination of communication channels and of synchronization points, to the transformation of non-deterministic computations into deterministic ones, and to the crucial saving of computational space. Furthermore, since the transformation system preserves the deadlock behavior of programs, it can be used for proving deadlock freeness of a given program wrt a class of queries. To this aim it is sometimes sufficient to apply our transformations and to specialize the resulting program wrt the given queries in such a way that the obtained program is trivially deadlock free.
Ross Horne, Sjouke Mauw, 2020
We uncover privacy vulnerabilities in the ICAO 9303 standard implemented by ePassports worldwide. These vulnerabilities, confirmed by ICAO, enable an ePassport holder who recently passed through a checkpoint to be reidentified without opening their ePassport. This paper explains how bisimilarity was used to discover these vulnerabilities, which exploit the BAC protocol - the original ICAO 9303 standard ePassport authentication protocol - and remain valid for the PACE protocol, which improves on the security of BAC in the latest ICAO 9303 standards. In order to tackle such bisimilarity problems, we develop here a chain of methods for the applied $\pi$-calculus including a symbolic under-approximation of bisimilarity, called open bisimilarity, and a modal logic, called classical FM, for describing and certifying attacks. Evidence is provided to argue for a new scheme for specifying such unlinkability problems that more accurately reflects the capabilities of an attacker.
The probabilistic bisimilarity distance of Deng et al. has been proposed as a robust quantitative generalization of Segala and Lynch's probabilistic bisimilarity for probabilistic automata. In this paper, we present a characterization of the bisimilarity distance as the solution of a simple stochastic game. The characterization gives us an algorithm to compute the distances by applying Condon's simple policy iteration on these games. The correctness of Condon's approach, however, relies on the assumption that the games are stopping. Our games may be non-stopping in general, yet we are able to prove termination for this extended class of games. Other algorithms have already been proposed in the literature to compute these distances, with complexity in $\textbf{UP} \cap \textbf{coUP}$ and $\textbf{PPAD}$. Despite the theoretical relevance, these algorithms are inefficient in practice. To the best of our knowledge, our algorithm is the first practical solution. The characterization of the probabilistic bisimilarity distance mentioned above crucially uses a dual presentation of the Hausdorff distance due to Mémoli. As an additional contribution, in this paper we show that Mémoli's result can also be used to prove that the bisimilarity distance bounds the difference in the maximal (or minimal) probability of two states of satisfying arbitrary $\omega$-regular properties, expressed, e.g., as LTL formulas.
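To illustrate the game-based characterization mentioned above, the sketch below solves a tiny, made-up simple stochastic game (max, min and average vertices with 0/1 sinks) by plain value iteration; this is a simpler fixed-point scheme than Condon's simple policy iteration used in the paper, and the vertex names are invented for the example.

```python
# Rough sketch: value iteration on a hypothetical simple stochastic game.
# Each vertex is ("max" | "min" | "avg", list of successors) or ("sink", payoff).
game = {
    "v0": ("max", ["v1", "v2"]),
    "v1": ("min", ["v2", "one"]),
    "v2": ("avg", ["one", "zero"]),  # fair coin over its successors
    "one": ("sink", 1.0),
    "zero": ("sink", 0.0),
}

def solve(game, iters=1000):
    # Initialize sinks to their payoff and every other vertex to 0.
    val = {v: (d[1] if d[0] == "sink" else 0.0) for v, d in game.items()}
    for _ in range(iters):
        for v, (kind, succ) in game.items():
            if kind == "max":
                val[v] = max(val[u] for u in succ)
            elif kind == "min":
                val[v] = min(val[u] for u in succ)
            elif kind == "avg":
                val[v] = sum(val[u] for u in succ) / len(succ)
    return val

print(solve(game))  # here v0, v1 and v2 all converge to 0.5
```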
Ross Horne, 2018
This paper is the first thorough investigation into the coarsest notion of bisimilarity for the applied pi-calculus that is a congruence relation: open barbed bisimilarity. An open variant of labelled bisimilarity (quasi-open bisimilarity), better suited to constructing bisimulations, is proven to coincide with open barbed bisimilarity. These bisimilarity congruences are shown to be characterised by an intuitionistic modal logic that can be used, for example, to describe an attack on privacy whenever a privacy property is violated. Open barbed bisimilarity provides a compositional approach to verifying cryptographic protocols, since properties proven can be reused in any context, including under input prefix. Furthermore, open barbed bisimilarity is sufficiently coarse for reasoning about security and privacy properties of cryptographic protocols; in contrast to the finer bisimilarity congruence, open bisimilarity, which cannot verify certain privacy properties.
David S. Hardin, 2014
In our current work, a library of formally verified software components is to be created and assembled, using the Low-Level Virtual Machine (LLVM) intermediate form, into subsystems whose top-level assurance relies on the assurance of the individual components. We have thus undertaken a project to build a translator from LLVM to the applicative subset of Common Lisp accepted by the ACL2 theorem prover. Our translator produces executable ACL2 formal models, allowing us both to prove theorems about the translated models and to validate those models by testing. The resulting models can be translated and certified without user intervention, even for code with loops, thanks to the use of the def::ung macro, which allows us to defer the question of termination. Initial measurements of concrete execution for translated LLVM functions indicate that performance is nearly 2.4 million LLVM instructions per second on a typical laptop computer. In this paper we overview the translation process and illustrate the translator's capabilities by way of a concrete example, including both a functional correctness theorem and a validation test for that example.