
From Big-Step to Small-Step Semantics and Back with Interpreter Specialisation

Published by EPTCS
Publication date: 2020
Research field: Informatics Engineering
Language: English





We investigate representations of imperative programs as constrained Horn clauses. Starting from operational semantics transition rules, we proceed by writing interpreters as constrained Horn clause programs directly encoding the rules. We then specialise an interpreter with respect to a given source program to achieve a compilation of the source language to Horn clauses (an instance of the first Futamura projection). The process is described in detail for an interpreter for a subset of C, directly encoding the rules of big-step operational semantics for C. A similar translation based on small-step semantics could be carried out, but we show an approach to obtaining a small-step representation using a linear interpreter for big-step Horn clauses. This interpreter is again specialised to achieve the translation from big-step to small-step style. The linear small-step program can be transformed back to a big-step non-linear program using a third interpreter. A regular path expression is computed for the linear program using Tarjan's algorithm, and this regular expression then guides an interpreter to compute a program path. The transformation is realised by specialisation of the path interpreter. In all of the transformation phases, we use an established partial evaluator and exploit standard logic program transformation to remove redundant data structures and arguments in predicates and rename predicates to make clear their link to statements in the original source program.
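
To make the encoding concrete, the following is a minimal sketch, not the paper's actual interpreter, of how big-step rules for a tiny imperative while-language can be written directly as Horn clauses; the predicate names exec/3, eval/3 and update/4 and the term representation of statements are illustrative assumptions.

```prolog
% Big-step rules of a tiny imperative language encoded as Horn clauses.
% exec(Stmt, EnvIn, EnvOut) relates a statement to the variable store
% before and after execution; eval(Expr, Env, Val) evaluates expressions.

eval(cst(N), _, N).
eval(var(X), Env, V)     :- member(X-V, Env).
eval(add(E1,E2), Env, V) :- eval(E1, Env, V1), eval(E2, Env, V2), V is V1 + V2.

exec(skip, Env, Env).
exec(asgn(X,E), Env, Env1)   :- eval(E, Env, V), update(Env, X, V, Env1).
exec(seq(S1,S2), Env, Env2)  :- exec(S1, Env, Env1), exec(S2, Env1, Env2).
exec(ite(B,S1,_), Env, Env1) :- eval(B, Env, V), V =\= 0, exec(S1, Env, Env1).
exec(ite(B,_,S2), Env, Env1) :- eval(B, Env, V), V =:= 0, exec(S2, Env, Env1).
exec(while(B,S), Env, Env2)  :-
    eval(B, Env, V), V =\= 0,
    exec(S, Env, Env1), exec(while(B,S), Env1, Env2).
exec(while(B,_), Env, Env)   :- eval(B, Env, V), V =:= 0.

% Store update: replace the binding for X, or add it if absent.
update([], X, V, [X-V]).
update([X-_|T], X, V, [X-V|T]).
update([Y-W|T], X, V, [Y-W|T1]) :- X \== Y, update(T, X, V, T1).
```

For example, the query `?- exec(seq(asgn(x,cst(1)), asgn(y, add(var(x), cst(2)))), [], Env).` succeeds with `Env = [x-1, y-3]`. Specialising such an interpreter with respect to a fixed source program, as in the first Futamura projection described above, yields clauses whose predicates correspond to points in the source program, which is the sense in which the source language is compiled to Horn clauses.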




Read also

Francesco Dagnino, 2021
It is well-known that big-step semantics is not able to distinguish stuck and non-terminating computations. This is a strong limitation, as it makes it very difficult to reason about properties involving infinite computations, such as type soundness, which cannot even be expressed. We show that this issue is only apparent: the distinction between stuck and diverging computations is implicit in any big-step semantics and it just needs to be uncovered. To achieve this goal, we develop a systematic study of big-step semantics: we introduce an abstract definition of what a big-step semantics is, we define a notion of computation by formalising the evaluation algorithm implicitly associated with any big-step semantics, and we show how to canonically extend a big-step semantics to characterise stuck and diverging computations. Building on these notions, we describe a general proof technique to show that a predicate is sound, that is, it prevents stuck computation, with respect to a big-step semantics. One needs to check three properties relating the predicate and the semantics and, if they hold, the predicate is sound. The extended semantics are essential to establish this meta-logical result, but are of no concern to the user, who only needs to prove the three properties of the initial big-step semantics. Finally, we illustrate the technique by several examples, showing that it is applicable also in cases where subject reduction does not hold, hence the standard technique for small-step semantics cannot be used.
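
As a rough illustration of the distinction made explicit in such an extension, here is a hypothetical toy evaluator, not the authors' construction: a stuck computation is given the explicit result `wrong` rather than simply having no derivation. Diverging computations still have no finite derivation; characterising them requires the coinductive machinery developed in the paper, which plain Prolog clauses do not capture.

```prolog
% Hypothetical toy evaluator: a stuck computation yields the explicit
% result wrong instead of having no derivation, so stuck and diverging
% computations are no longer conflated.

bigstep(num(N), val(num(N))).
bigstep(tt,     val(tt)).
bigstep(plus(E1,E2), R) :-
    bigstep(E1, R1),
    plus_left(R1, E2, R).

plus_left(val(num(V1)), E2, R) :- bigstep(E2, R2), plus_right(V1, R2, R).
plus_left(val(tt), _, wrong).          % stuck: left operand is not a number
plus_left(wrong,   _, wrong).          % propagate an earlier stuck result

plus_right(V1, val(num(V2)), val(num(V))) :- V is V1 + V2.
plus_right(_,  val(tt),      wrong).   % stuck: right operand is not a number
plus_right(_,  wrong,        wrong).
```

Here `bigstep(plus(num(1), tt), R)` succeeds with `R = wrong`, whereas `bigstep(plus(num(1), num(2)), R)` gives `R = val(num(3))`.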
Structural operational semantic specifications come in different styles: small-step and big-step. A problem with the big-step style is that specifying divergence and abrupt termination gives rise to annoying duplication. We present a novel approach to representing divergence and abrupt termination in big-step semantics using status flags. This avoids the duplication problem, and uses fewer rules and premises for representing divergence than previous approaches in the literature.
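
A hypothetical sketch of the status-flag idea, written as Horn clauses in the spirit of the main paper rather than in the cited work's own notation: the big-step judgement carries a flag, so abrupt termination is propagated by one generic rule instead of duplicating every rule. Divergence is treated analogously, but needs a coinductive reading of the clauses, which plain Prolog does not provide. The predicate names exec/4 and continue/5 are illustrative assumptions.

```prolog
% exec(Stmt, EnvIn, EnvOut, Status): the judgement returns a status flag,
% here ok or err, so abruptly terminating runs reuse the ordinary rules.

exec(skip,  Env, Env, ok).
exec(throw, Env, Env, err).                       % abrupt termination
exec(seq(S1,S2), Env, EnvOut, St) :-
    exec(S1, Env, Env1, St1),
    continue(St1, S2, Env1, EnvOut, St).

% Run the second statement only when the first finished normally;
% any other flag is propagated unchanged.
continue(ok,  S, Env, EnvOut, St) :- exec(S, Env, EnvOut, St).
continue(err, _, Env, Env,   err).
```

For instance, `exec(seq(throw, skip), e0, E, St)` succeeds with `St = err` without any dedicated error rule for sequencing.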
We propose a general proof technique to show that a predicate is sound, that is, prevents stuck computation, with respect to a big-step semantics. This result may look surprising, since in big-step semantics there is no difference between non-terminating and stuck computations, hence soundness cannot even be expressed. The key idea is to define constructions yielding an extended version of a given arbitrary big-step semantics, where the difference is made explicit. The extended semantics are exploited in the meta-theory, notably they are necessary to show that the proof technique works. However, they remain transparent when using the proof technique, since it consists in checking three conditions on the original rules only, as we illustrate by several examples.
A reliable technique for deductive program verification should be proven sound with respect to the semantics of the programming language. For each different language, the construction of a separate soundness proof is often a laborious undertaking. In language-independent program verification, common aspects of computer programs are addressed to enable sound reasoning for all languages. In this work, we propose a solution for sound reasoning about iteration and recursion based on the big-step operational semantics of any programming language. We give inductive proofs of the soundness and relative completeness of our reasoning technique. We illustrate the technique on simplified programming languages of the imperative and functional paradigms, with diverse features. We also mechanise all formal results in the Coq proof assistant.
Oleg Kiselyov, 2019
We reflect on programming with complicated effects, recalling an undeservingly forgotten alternative to monadic programming and checking to see how well it can actually work in modern functional languages. We adopt and argue for the position of factoring an effectful program into a first-order effectful DSL with a rich, higher-order macro system. Not all programs can be thus factored. Although the approach is not general-purpose, it does admit interesting programs. The effectful DSL is likewise rather problem-specific and lacks general-purpose monadic composition, or even functions. On the upside, it expresses the problem elegantly, is simple to implement and reason about, and lends itself to non-standard interpretations such as code generation (compilation) and abstract interpretation. A specialized DSL is liable to be frequently extended; the experience with the tagless-final style of DSL embedding has shown that the DSL evolution can be made painless, with maximum code reuse. We illustrate the argument on a simple but representative example of a rather complicated effect -- non-determinism, including committed choice. Unexpectedly, it turns out we can write interesting non-deterministic programs in an ML-like language just as naturally and elegantly as in the functional-logic language Curry -- and not only run them but also statically analyze, optimize and compile them. The richness of the Meta Language does, in reality, compensate for the simplicity of the effectful DSL. The key idea goes back to the origins of ML as the Meta Language for the Edinburgh LCF theorem prover. Instead of using ML to build theorems, we now build (DSL) programs.