
Transformations of CCP programs

Posted by: Sandro Etalle
Publication date: 2001
Research field: Informatics Engineering
Paper language: English





We introduce a transformation system for concurrent constraint programming (CCP). We define suitable applicability conditions for the transformations, which guarantee that the input/output CCP semantics is preserved even when distinguishing deadlocked computations from successful ones and when considering intermediate results of (possibly) non-terminating computations. The system allows us to optimize CCP programs while preserving their intended meaning: in addition to the usual benefits obtained for sequential declarative languages, the transformation of concurrent programs can also lead to the elimination of communication channels and synchronization points, to the transformation of non-deterministic computations into deterministic ones, and to crucial savings of computational space. Furthermore, since the transformation system preserves the deadlock behavior of programs, it can be used to prove deadlock freeness of a given program with respect to a class of queries. To this end, it is sometimes sufficient to apply our transformations and to specialize the resulting program with respect to the given queries in such a way that the obtained program is trivially deadlock free.
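
As a rough illustration of the channel-elimination benefit described above, here is a minimal Python analogy (it is not the paper's CCP calculus or transformation system, and all names in it are invented): unfolding a producer into its consumer removes the intermediate channel and its synchronization point while leaving the input/output behaviour unchanged.

```python
# Hedged sketch: a producer/consumer pair that communicates through a channel,
# and an "unfolded" version in which the producer's body has been inlined into
# the consumer, eliminating the channel and the synchronization point.
from queue import Queue
from threading import Thread

def original(xs):
    """Producer thread tells squared items on a channel; the consumer asks for and sums them."""
    channel: Queue = Queue()

    def producer():
        for x in xs:
            channel.put(x * x)   # "tell" a partial result on the channel
        channel.put(None)        # end-of-stream marker

    Thread(target=producer).start()
    total = 0
    while (item := channel.get()) is not None:  # "ask": block until something is told
        total += item
    return total

def transformed(xs):
    """Unfolded program: the producer's body is inlined, so no channel is needed."""
    return sum(x * x for x in xs)

# Both versions compute the same result; the second needs no channel or blocking.
assert original(range(10)) == transformed(range(10))
```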


Read also

79 - Robert Sison 2020
Proving only over source code that programs do not leak sensitive data leaves a gap between reasoning and reality that can only be filled by accounting for the behaviour of the compiler. Furthermore, software does not always have the luxury of limiting itself to single-threaded computation with resources statically dedicated to each user to ensure the confidentiality of their data. This results in mixed-sensitivity concurrent programs, which might reuse memory shared between their threads to hold data of different sensitivity levels at different times; for such programs, a compiler must preserve the value-dependent coordination of such mixed-sensitivity reuse despite the impact of concurrency. Here we demonstrate, using Isabelle/HOL, that it is feasible to verify that a compiler preserves noninterference, the strictest kind of confidentiality property, for mixed-sensitivity concurrent programs. First, we present notions of refinement that preserve a concurrent value-dependent notion of noninterference that we have designed to support such programs. As proving noninterference-preserving refinement can be considerably more complex than the standard refinements typically used to verify semantics-preserving compilation, our notions include a decomposition principle that separates the semantics-preservation from the security-preservation concerns. Second, we demonstrate that these refinement notions are applicable to verified secure compilation, by exercising them on a single-pass compiler for mixed-sensitivity concurrent programs that synchronise using mutex locks, from a generic imperative language to a generic RISC-style assembly language. Finally, we execute our compiler on a nontrivial mixed-sensitivity concurrent program modelling a real-world use case, thus preserving its source-level noninterference properties down to an assembly-level model automatically. (See paper for complete abstract.)
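
As a minimal, self-contained illustration of the property being preserved (a toy Python sketch, not the Isabelle/HOL development summarized above; the programs and names are invented), noninterference can be checked on a toy program by comparing two runs that agree on the public input but differ on the secret one.

```python
# Hedged sketch: noninterference for a toy two-input program means that runs which
# agree on the public (low) input but differ on the secret (high) input produce
# the same public output.

def secure(low, high):
    return low + 1                          # public output ignores the secret

def leaky(low, high):
    return low + (1 if high > 0 else 0)     # public output depends on the secret

def noninterfering(prog, low, high_a, high_b):
    """Compare the public results of two runs that differ only in the secret input."""
    return prog(low, high_a) == prog(low, high_b)

assert noninterfering(secure, low=3, high_a=0, high_b=42)
assert not noninterfering(leaky, low=3, high_a=0, high_b=42)
```
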
Concurrent constraint programming (ccp) is a well-established model for concurrency that singles out the fundamental aspects of asynchronous systems whose agents (or processes) evolve by posting and querying (partial) information in a global medium. Bisimilarity is a standard behavioural equivalence in concurrency theory. However, a well-behaved notion of bisimilarity for ccp, and a ccp partition refinement algorithm for deciding the strong version of this equivalence, have only recently been proposed. Weak bisimilarity is a central behavioural equivalence in process calculi and is obtained from the strong case by taking into account only the actions that are observable in the system. Typically, the standard partition refinement can also be used for deciding weak bisimilarity simply by using Milner's reduction from weak to strong bisimilarity, a technique referred to as saturation. In this paper we demonstrate that, because of its involved labeled transitions, the above-mentioned saturation technique does not work for ccp. We give an alternative reduction from weak ccp bisimilarity to the strong one that allows us to use the ccp partition refinement algorithm for deciding this equivalence.
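
The saturation technique itself is easy to sketch on a plain labelled transition system (a hedged Python sketch of the generic construction, not the ccp-specific one whose labeled transitions are exactly what makes it fail): weak transitions are obtained by closing each visible action under tau-steps on both sides, after which a strong-equivalence checker can be reused.

```python
# Hedged sketch: Milner-style saturation on a generic labelled transition system.
# A weak transition s ==a==> t (a visible) is tau* a tau*; the weak tau relation
# is just the tau-closure itself.

def tau_closure(states, trans):
    """For each state, the set of states reachable by zero or more tau steps."""
    closure = {s: {s} for s in states}
    changed = True
    while changed:
        changed = False
        for s in states:
            for t in list(closure[s]):
                for (label, u) in trans.get(t, ()):
                    if label == "tau" and u not in closure[s]:
                        closure[s].add(u)
                        changed = True
    return closure

def saturate(states, trans):
    """Weak visible transitions: tau*, one visible action, then tau* again."""
    cl = tau_closure(states, trans)
    weak = {s: set() for s in states}
    for s in states:
        for t in cl[s]:
            for (label, u) in trans.get(t, ()):
                if label != "tau":
                    weak[s].update((label, v) for v in cl[u])
    return weak

# Example: p --tau--> q --a--> r yields the weak transition p ==a==> r.
trans = {"p": [("tau", "q")], "q": [("a", "r")], "r": []}
print(saturate({"p", "q", "r"}, trans)["p"])   # {('a', 'r')}
```
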
We propose a timed and soft extension of Concurrent Constraint Programming. The time extension is based on the hypothesis of bounded asynchrony: the computation takes a bounded period of time and is measured by a discrete global clock. Action prefixing is then considered as the syntactic marker which distinguishes a time instant from the next one. Supported by soft constraints instead of crisp ones, tell and ask agents are now equipped with a preference (or consistency) threshold which is used to determine their success or suspension. In the paper we provide a language to describe the agents' behavior, together with its operational and denotational semantics, for which we also prove the compositionality and correctness properties. After presenting a semantics using maximal parallelism of actions, we also describe a version for their interleaving on a single processor (with maximal parallelism for time elapsing). Coordinating agents that need to take decisions both on preference values and time events may benefit from this language. To appear in Theory and Practice of Logic Programming (TPLP).
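
A rough sketch of the threshold mechanism (an invented Python toy over the fuzzy semiring with min as combination; it is not the paper's language or its semantics) is the following: tell and ask succeed or suspend depending on whether the store's consistency level meets their preference threshold.

```python
# Hedged sketch: a soft constraint store whose consistency level lives in [0, 1]
# and is combined with "min"; tell/ask agents carry a preference threshold.

class SoftStore:
    def __init__(self):
        self.level = 1.0   # consistency level of the (initially empty) store

    def tell(self, constraint_level, threshold):
        """Add a soft constraint if the resulting store stays above the threshold."""
        combined = min(self.level, constraint_level)
        if combined >= threshold:
            self.level = combined
            return "success"
        return "suspend"

    def ask(self, query_level, threshold):
        """Succeed if the store entails the query at least at the given threshold."""
        return "success" if min(self.level, query_level) >= threshold else "suspend"

store = SoftStore()
print(store.tell(0.8, threshold=0.5))    # success: the store level becomes 0.8
print(store.ask(0.9, threshold=0.7))     # success: min(0.8, 0.9) >= 0.7
print(store.ask(0.9, threshold=0.85))    # suspend: 0.8 < 0.85
```
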
112 - Vincent Nys 2017
We extend a technique called Compiling Control. The technique transforms coroutining logic programs into logic programs that, when executed under the standard left-to-right selection rule (and not using any delay features), have the same computational behavior as the coroutining program. In recent work, we revised Compiling Control and reformulated it as an instance of Abstract Conjunctive Partial Deduction. That work was mostly focused on the program analysis performed in Compiling Control. In the current paper, we focus on the synthesis of the transformed program. Instead of synthesizing a new logic program, we synthesize a CHR(Prolog) program which mimics the coroutining program. The synthesis to CHR yields programs containing only simplification rules, which are particularly amenable to certain static analysis techniques. The programs are also more concise and readable, and can be ported to CHR implementations embedded in languages other than Prolog.
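
To give a flavour of execution by simplification rules alone (a toy Python rewriting loop; it is not CHR(Prolog) and not the synthesis described above, and the gcd example is invented), a simplification rule removes the constraints it matches from the store and replaces them with its body, and execution just fires rules until none applies.

```python
# Hedged sketch: a multiset constraint store rewritten by two simplification rules,
# in the style of the classic CHR gcd program.

def step(store):
    """Apply one simplification rule to the constraint store, if any rule fires."""
    # Rule 1:  gcd(0) <=> true.
    if ("gcd", 0) in store:
        store.remove(("gcd", 0))
        return True
    # Rule 2:  gcd(N), gcd(M) <=> N <= M | gcd(N), gcd(M - N).
    values = sorted(v for (name, v) in store if name == "gcd")
    if len(values) >= 2:
        n, m = values[0], values[1]
        # The rule consumes both matched constraints and reposts its body.
        store.remove(("gcd", n))
        store.remove(("gcd", m))
        store.extend([("gcd", n), ("gcd", m - n)])
        return True
    return False

store = [("gcd", 12), ("gcd", 8)]
while step(store):
    pass
print(store)   # [('gcd', 4)]
```
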
115 - Allan Blanchard 2017
Frama-C is a software analysis framework that provides a common infrastructure and a common behavioral specification language to plugins that implement various static and dynamic analyses of C programs. Most plugins do not support concurrency. We have proposed Conc2Seq, a Frama-C plugin based on program transformation, capable of leveraging the existing, huge code base of plugins and of handling concurrent C programs. In this paper we formalize and sketch the proof of correctness of the program transformation principle behind Conc2Seq, and present an effort towards the full mechanization of both the formalization and the proofs with the proof assistant Coq.
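
The sequentialisation idea behind such a transformation can be sketched roughly (a toy Python interleaving interpreter with invented names; it is not Conc2Seq or its C-to-C transformation): each thread becomes a list of atomic steps over a shared state, and a sequential driver executes those steps under an explicitly chosen interleaving.

```python
# Hedged sketch: two toy "threads" given as lists of atomic steps over a shared
# dictionary, executed sequentially under an explicit interleaving (schedule).

def make_threads(shared):
    t1 = [lambda: shared.__setitem__("x", shared["x"] + 1),
          lambda: shared.__setitem__("y", shared["x"] * 2)]
    t2 = [lambda: shared.__setitem__("x", shared["x"] + 10)]
    return [t1, t2]

def run_sequentialised(schedule):
    """Run the next atomic step of the scheduled thread, one step at a time."""
    shared = {"x": 0, "y": 0}
    threads = make_threads(shared)
    counters = [0] * len(threads)
    for tid in schedule:
        threads[tid][counters[tid]]()
        counters[tid] += 1
    return shared

print(run_sequentialised([0, 0, 1]))   # {'x': 11, 'y': 2}
print(run_sequentialised([0, 1, 0]))   # {'x': 11, 'y': 22}
```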
