
Effects Without Monads: Non-determinism -- Back to the Meta Language

Added by: EPTCS
Publication date: 2019
Research language: English
Authors: Oleg Kiselyov





We reflect on programming with complicated effects, recalling an undeservingly forgotten alternative to monadic programming and checking to see how well it can actually work in modern functional languages. We adopt and argue the position of factoring an effectful program into a first-order effectful DSL with a rich, higher-order macro system. Not all programs can be thus factored. Although the approach is not general-purpose, it does admit interesting programs. The effectful DSL is likewise rather problem-specific and lacks general-purpose monadic composition, or even functions. On the upside, it expresses the problem elegantly, is simple to implement and reason about, and lends itself to non-standard interpretations such as code generation (compilation) and abstract interpretation. A specialized DSL is liable to be frequently extended; the experience with the tagless-final style of DSL embedding has shown that DSL evolution can be made painless, with maximum code reuse.

We illustrate the argument on a simple but representative example of a rather complicated effect -- non-determinism, including committed choice. Unexpectedly, it turns out we can write interesting non-deterministic programs in an ML-like language just as naturally and elegantly as in the functional-logic language Curry -- and not only run them but also statically analyze, optimize and compile them. The richness of the Meta Language does, in reality, compensate for the simplicity of the effectful DSL.

The key idea goes back to the origins of ML as the Meta Language for the Edinburgh LCF theorem prover. Instead of using ML to build theorems, we now build (DSL) programs.
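To make the factoring concrete, here is a minimal illustrative sketch, not code from the paper: the signature name NDET, the toy operations and both interpreters below are assumptions of this illustration. It embeds a small first-order non-deterministic DSL in OCaml in the tagless-final style and gives it two interpretations: one runs a program to the list of its answers, the other is a crude abstract interpretation that bounds the number of answers without running the program.

(* Illustrative sketch only; names and operations are invented for this page. *)

module type NDET = sig
  type 'a repr                      (* DSL expressions of type 'a *)
  val int  : int -> int repr
  val add  : int repr -> int repr -> int repr
  val fail : 'a repr                (* no result *)
  val amb  : 'a repr -> 'a repr -> 'a repr   (* non-deterministic choice *)
  val once : 'a repr -> 'a repr     (* committed choice: keep the first result *)
end

(* Standard interpreter: an expression denotes the list of its results. *)
module RList : NDET with type 'a repr = 'a list = struct
  type 'a repr = 'a list
  let int n     = [n]
  let add xs ys = List.concat_map (fun x -> List.map (fun y -> x + y) ys) xs
  let fail      = []
  let amb xs ys = xs @ ys
  let once = function [] -> [] | x :: _ -> [x]
end

(* Non-standard interpreter: abstract interpretation that bounds the
   number of possible results without running the program. *)
module RSize : NDET with type 'a repr = int = struct
  type 'a repr = int                (* upper bound on the result count *)
  let int _   = 1
  let add m n = m * n
  let fail    = 0
  let amb m n = m + n
  let once m  = min m 1
end

(* A DSL program is written once, against the signature ... *)
module Ex (S : NDET) = struct
  open S
  (* choose 1 or 2, add 10, but commit to the first answer *)
  let prog = once (add (int 10) (amb (int 1) (int 2)))
end

(* ... and interpreted in as many ways as there are implementations. *)
let () =
  let module E1 = Ex (RList) in
  List.iter (Printf.printf "result: %d\n") E1.prog;       (* result: 11 *)
  let module E2 = Ex (RSize) in
  Printf.printf "at most %d result(s)\n" E2.prog          (* at most 1 result(s) *)

Because the example program is written once against the signature, a further interpretation, say a code generator, can be added without touching existing DSL programs; this is the painless extension and code reuse that the abstract attributes to the tagless-final style.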


Related research

We investigate representations of imperative programs as constrained Horn clauses. Starting from operational semantics transition rules, we proceed by writing interpreters as constrained Horn clause programs directly encoding the rules. We then specialise an interpreter with respect to a given source program to achieve a compilation of the source language to Horn clauses (an instance of the first Futamura projection). The process is described in detail for an interpreter for a subset of C, directly encoding the rules of big-step operational semantics for C. A similar translation based on small-step semantics could be carried out, but we show an approach to obtaining a small-step representation using a linear interpreter for big-step Horn clauses. This interpreter is again specialised to achieve the translation from big-step to small-step style. The linear small-step program can be transformed back to a big-step non-linear program using a third interpreter. A regular path expression is computed for the linear program using Tarjan's algorithm, and this regular expression then guides an interpreter to compute a program path. The transformation is realised by specialisation of the path interpreter. In all of the transformation phases, we use an established partial evaluator and exploit standard logic program transformations to remove redundant data structures and arguments in predicates, and rename predicates to make clear their link to statements in the original source program.
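As a rough illustration of the representation this entry starts from, a big-step rule can be written down as a constrained Horn clause. The fragment below is a sketch in OCaml (the language used elsewhere on this page) with invented constructor and predicate names; it is not the encoding used in the paper. It defines a small datatype of constrained Horn clauses and encodes the big-step rule for sequential composition, exec(seq(S1,S2), Env, Env2) :- exec(S1, Env, Env1), exec(S2, Env1, Env2), as one clause value.

(* Illustrative sketch only; the encoding and names are assumptions. *)

type term =
  | Var of string                   (* logic variable *)
  | Fn  of string * term list       (* function symbol applied to arguments *)

type atom = { pred : string; args : term list }   (* P(t1, ..., tn) *)

(* A constrained Horn clause: head :- constraint, body atoms *)
type clause = { head : atom; constr : string; body : atom list }

(* Big-step rule for "s1; s2" encoded as one clause. *)
let seq_rule : clause =
  { head   = { pred = "exec";
               args = [ Fn ("seq", [ Var "S1"; Var "S2" ]); Var "Env"; Var "Env2" ] };
    constr = "true";
    body   = [ { pred = "exec"; args = [ Var "S1"; Var "Env";  Var "Env1" ] };
               { pred = "exec"; args = [ Var "S2"; Var "Env1"; Var "Env2" ] } ] }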
This paper is a contribution to the search for efficient and high-level mathematical tools to specify and reason about (abstract) programming languages or calculi. Generalising the reduction monads of Ahrens et al., we introduce transition monads, thus covering new applications such as lambda-bar-mu-calculus, pi-calculus, Positive GSOS specifications, differential lambda-calculus, and the big-step, simply-typed, call-by-value lambda-calculus. Moreover, we design a suitable notion of signature for transition monads.
Francesco Dagnino, 2021
It is well-known that big-step semantics is not able to distinguish stuck and non-terminating computations. This is a strong limitation, as it makes it very difficult to reason about properties involving infinite computations, such as type soundness, which cannot even be expressed. We show that this issue is only apparent: the distinction between stuck and diverging computations is implicit in any big-step semantics and it just needs to be uncovered. To achieve this goal, we develop a systematic study of big-step semantics: we introduce an abstract definition of what a big-step semantics is, we define a notion of computation by formalising the evaluation algorithm implicitly associated with any big-step semantics, and we show how to canonically extend a big-step semantics to characterise stuck and diverging computations. Building on these notions, we describe a general proof technique to show that a predicate is sound, that is, it prevents stuck computations, with respect to a big-step semantics. One needs to check three properties relating the predicate and the semantics and, if they hold, the predicate is sound. The extended semantics are essential to establish this meta-logical result, but are of no concern to the user, who only needs to prove the three properties of the initial big-step semantics. Finally, we illustrate the technique by several examples, showing that it is also applicable in cases where subject reduction does not hold and hence the standard technique for small-step semantics cannot be used.
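The distinction this entry discusses can be made concrete with a toy evaluator. The OCaml sketch below is an illustration only: the mini-language, the constructor names and the explicit Stuck result are invented here and are not the authors' formal development. In the spirit of the extension the abstract describes, the evaluator reports stuck terms explicitly, while a diverging term still shows up only as non-termination of eval, which is exactly why the stuck/diverging distinction has to be uncovered rather than read off a standard big-step definition.

(* Toy illustration only; not the paper's formal development. *)

type exp =
  | Num of int
  | Tru                            (* a boolean literal, used to create stuck terms *)
  | Add of exp * exp
  | Omega                          (* a term that evaluates to itself forever *)

type value = Int of int | Bool of bool

type result =
  | Value of value
  | Stuck of exp                   (* no big-step rule applies *)

let rec eval (e : exp) : result =
  match e with
  | Num n -> Value (Int n)
  | Tru   -> Value (Bool true)
  | Add (e1, e2) ->
    (match eval e1, eval e2 with
     | Value (Int n1), Value (Int n2) -> Value (Int (n1 + n2))
     | Stuck s, _ | _, Stuck s -> Stuck s
     | _, _ -> Stuck e)            (* e.g. adding a boolean: stuck *)
  | Omega -> eval Omega            (* diverging computation: eval never returns *)

let () =
  match eval (Add (Num 1, Tru)) with
  | Value _ -> print_endline "value"
  | Stuck _ -> print_endline "stuck"    (* prints "stuck" *)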
We introduce a new application for inductive logic programming: learning the semantics of programming languages from example evaluations. In this short paper, we explore a simplified task in this domain using the Metagol meta-interpretive learning system. We highlight the challenging aspects of this scenario, including abstracting over function symbols, non-terminating examples, and learning non-observed predicates, and propose extensions to Metagol that help overcome these challenges and may prove useful in other domains.
We present a self-certifying compiler for the COGENT systems language. COGENT is a restricted, polymorphic, higher-order, and purely functional language with linear types and without the need for a trusted runtime or garbage collector. It compiles to efficient C code that is designed to interoperate with existing C functions. The language is suited for layered systems code with minimal sharing such as file systems or network protocol control code. For a well-typed COGENT program, the compiler produces C code, a high-level shallow embedding of its semantics in Isabelle/HOL, and a proof that the C code correctly implements this embedding. The aim is for proof engineers to reason about the full semantics of real-world systems code productively and equationally, while retaining the interoperability and leanness of C. We describe the formal verification stages of the compiler, which include automated formal refinement calculi, a switch from imperative update semantics to functional value semantics formally justified by the linear type system, and a number of standard compiler phases such as type checking and monomorphisation. The compiler certificate is a series of language-level meta proofs and per-program translation validation phases, combined into one coherent top-level theorem in Isabelle/HOL.