
Slicing Concurrent Constraint Programs

Published by Carlos Olarte
Publication date: 2016
Research field: Informatics Engineering
Paper language: English





Concurrent Constraint Programming (CCP) is a declarative model for concurrency where agents interact by telling and asking constraints (pieces of information) in a shared store. Some previous works have developed (approximated) declarative debuggers for CCP languages. However, the task of debugging concurrent programs remains difficult. In this paper we define a dynamic slicer for CCP and show it to be a useful companion to existing debugging techniques. Our technique starts from a partial computation (a trace) that exhibits the presence of bugs. Often the amount of information in such a trace is overwhelming, and the user easily gets lost, since she cannot focus on the sources of the bugs. Our slicer allows the user to mark part of the state of the computation and assists her in eliminating most of the redundant information in order to highlight the errors. We show that this technique can be tailored to timed variants of CCP. We also provide a freely available prototype implementation for experimentation.
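To make the idea concrete, here is a minimal, illustrative sketch (in Python, not the authors' prototype) of what dynamic slicing of a CCP trace amounts to: a trace is modelled as a list of "tell" steps, each recording the constraint added and the constraints that were asked before adding it; marking a constraint of the final store keeps only the steps that transitively contributed to it.

```python
# A toy model of a CCP trace: each step is a tell by an agent, together with the
# constraints it asked (depended on) before telling.  All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Step:
    agent: str                               # agent performing the tell
    tells: str                               # constraint added to the shared store
    asks: set = field(default_factory=set)   # constraints asked before telling

def slice_trace(trace, marked):
    """Keep only the steps relevant to the marked constraints (backwards pass)."""
    relevant, kept = set(marked), []
    for step in reversed(trace):
        if step.tells in relevant:
            kept.append(step)
            relevant |= step.asks            # asked constraints become relevant too
    return list(reversed(kept))

# Hypothetical buggy computation: the user marks the suspicious constraint y=0;
# the unrelated tell of z=5 is sliced away.
trace = [Step("P", "x=1"), Step("Q", "z=5"), Step("R", "y=0", {"x=1"})]
print(slice_trace(trace, {"y=0"}))           # keeps the tells of P and R only
```

A real slicer for CCP must also track which processes performed each step and handle ask/tell causality under parallelism and hiding; this sketch only conveys the marking-and-pruning idea.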


Read also

Timed Concurrent Constraint Programming (tcc) is a declarative model for concurrency offering a logic for specifying reactive systems, i.e. systems that continuously interact with the environment. The universal tcc formalism (utcc) is an extension of tcc with the ability to express mobility. Here mobility is understood as communication of private names, as typically done for mobile systems and security protocols. In this paper we consider the denotational semantics for tcc and extend it to a collecting semantics for utcc based on closure operators over sequences of constraints. Relying on this semantics, we formalize a general framework for data-flow analyses of tcc and utcc programs by abstract interpretation techniques. The concrete and abstract semantics we propose are compositional, thus allowing us to reduce the complexity of data-flow analyses. We show that our method is sound and parametric with respect to the abstract domain; hence, different analyses can be performed by instantiating the framework. We illustrate how abstract domains previously defined for logic programming can be reused to perform, for instance, a groundness analysis of tcc programs, and we show the applicability of this analysis in the context of reactive systems. Furthermore, we use the abstract semantics to exhibit a secrecy flaw in a security protocol. Finally, we show how to perform an analysis that may prove tcc programs to be suspension-free, which can be useful for several purposes, such as optimizing compilation or debugging.
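As a hedged illustration of the kind of analysis mentioned above (my own sketch, not the paper's framework), a groundness analysis can be pictured as follows: the abstract store maps each variable to True (definitely ground) or False (possibly non-ground), and telling an equation x = t makes x ground exactly when every variable occurring in t is already ground.

```python
# Toy groundness analysis over an abstract store (illustrative, assumed names).
def tell_ground(abs_store, x, vars_in_t):
    """Abstract effect of telling the equation x = t, given the variables of t."""
    out = dict(abs_store)
    out[x] = all(abs_store.get(v, False) for v in vars_in_t)
    return out

store = {"a": True}                       # a is known to be ground
store = tell_ground(store, "b", ["a"])    # b = f(a)  -> b becomes ground
store = tell_ground(store, "c", ["d"])    # c = g(d)  -> d unknown, c possibly non-ground
print(store)                              # {'a': True, 'b': True, 'c': False}
```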
Constraint Logic Programming (CLP) is a language scheme for combining two declarative paradigms: constraint solving and logic programming. Concurrent Constraint Programming (CCP) is a declarative model for concurrency where agents interact by telling and asking constraints in a shared store. In a previous paper, we developed a framework for dynamic slicing of CCP where the user first identifies that a (partial) computation is wrong. Then, she marks (selects) some parts of the final state corresponding to the data (constraints) and processes that she wants to study more deeply. An automatic slicing process then begins, and the partial computation is pruned by removing irrelevant information. In this paper we give two major contributions. First, we extend the framework to CLP, thus generalizing the previous work. Second, we provide an assertion language, suitable for both CCP and CLP, which allows the user to specify properties of the computations of her program. If a state in a computation does not satisfy an assertion, then some wrong information has been identified and an automatic slicing process can start. In this way we take a step further towards automating the slicing process. We show that our framework can be integrated with the previous semi-automatic one, giving the user more choices and flexibility. We demonstrate the usefulness of our approach by means of examples and experiments.
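A minimal sketch of how an assertion can trigger slicing automatically (the assertion syntax here is assumed, not the paper's language): each state of the computation is a set of constraints, an assertion is a predicate over that set, and the first violating state yields the constraints to be marked and handed to a slicer such as the one sketched after the main abstract above.

```python
# Toy assertion checking over a sequence of CCP stores (illustrative only).
def first_violation(states, assertion):
    """Return (index, store) of the first state violating the assertion, or None."""
    for i, store in enumerate(states):
        if not assertion(store):
            return i, store
    return None

# Hypothetical computation: the store grows as agents tell constraints.
states = [
    {"x=1"},
    {"x=1", "y>0"},
    {"x=1", "y>0", "y=0"},   # violates the intended invariant below
]

# Intended invariant: the store never contains both y>0 and y=0.
assertion = lambda store: not ({"y>0", "y=0"} <= store)

hit = first_violation(states, assertion)
if hit is not None:
    i, store = hit
    # The offending constraints are marked and passed on to the dynamic slicer.
    print(f"assertion fails at state {i}; mark", store & {"y>0", "y=0"})
```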
Concurrent strategies based on event structures are examined from the viewpoint of may and must testing in traditional process calculi. In their pure form concurrent strategies fail to expose the deadlocks and divergences that can arise in their composition. This motivates an extension of the bicategory of concurrent strategies to treat the may and must behaviour of strategies under testing. One extension adjoins neutral moves to strategies but in so doing loses identities w.r.t. composition. This in turn motivates another extension in which concurrent strategies are accompanied by stopping configurations; the ensuing stopping strategies inherit the structure of a bicategory from that of strategies. The technical developments converge in providing characterisations of the may and must equivalences and preorders on strategies.
Most modern (classical) programming languages support recursion. Recursion has also been successfully applied to the design of several quantum algorithms and introduced in a couple of quantum programming languages. So, it can be expected that recursion will become one of the fundamental paradigms of quantum programming. Several program logics have been developed for verification of quantum while-programs. However, there are as yet no general methods for reasoning about (mutually) recursive procedures and ancilla quantum data structures in quantum computing (with measurement). We fill the gap in this paper by proposing a parameterized quantum assertion logic and, based on it, designing a quantum Hoare logic for verifying parameterized recursive quantum programs with ancilla data and probabilistic control. The quantum Hoare logic can be used to prove partial, total, and even probabilistic correctness (by reducing to total correctness) of those quantum programs. In particular, we construct two counterexamples illustrating the incompleteness of non-parameterized assertions in verifying recursive procedures, and one counterexample showing the failure of reasoning with exact probabilities based on partial correctness. The effectiveness of our logic is shown by three main examples: recursive quantum Markov chains (with probabilistic control), fixed-point Grover's search, and recursive quantum Fourier sampling.
This paper investigates the use of generating functions (GFs) encoding measures over the program variables for reasoning about discrete probabilistic programs. To that end, we define a denotational GF-transformer semantics for probabilistic while-programs, and show that it instantiates Kozen's seminal distribution-transformer semantics. We then study the effective use of GFs for program analysis. We show that finitely expressible GFs enable checking super-invariants by means of computer algebra tools, and that they can be used to determine termination probabilities. The paper concludes by characterizing a class of (possibly infinite-state) programs whose semantics is a rational GF encoding a discrete phase-type distribution.
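As a worked toy example in the spirit of that last abstract (my own, not taken from the paper): for the loop x := 0; while flip(1/2): x := x + 1, the final value of x is geometric with P(x = k) = (1/2)^(k+1), so its generating function is G(z) = sum_k (1/2)^(k+1) z^k = 1/(2 - z), a rational GF; G(1) = 1 gives the termination probability and G'(1) = 1 the expected final value. The sketch below checks these facts with sympy.

```python
# Checking a hand-derived rational generating function with computer algebra.
from sympy import symbols, series, diff

z = symbols('z')
G = 1 / (2 - z)                      # GF of the loop's output distribution (derived above)

# Taylor coefficients recover the point probabilities P(x = k) = (1/2)^(k+1).
print(series(G, z, 0, 5))            # 1/2 + z/4 + z**2/8 + z**3/16 + z**4/32 + O(z**5)

# Total mass G(1) is the termination probability; G'(1) is the expected value.
print(G.subs(z, 1))                  # 1  -> the loop terminates almost surely
print(diff(G, z).subs(z, 1))         # 1  -> expected final value of x
```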