
A Higher-Order Abstract Syntax Approach to the Verified Compilation of Functional Programs

Added by: Yuting Wang
Publication date: 2017
Language: English
Authors: Yuting Wang





We argue that the implementation and verification of compilers for functional programming languages are greatly simplified by employing a higher-order representation of syntax known as Higher-Order Abstract Syntax, or HOAS. The underlying idea of HOAS is to use a meta-language that provides a built-in and logical treatment of binding-related notions. By embedding the meta-language within a larger programming or reasoning framework, it is possible to absorb the treatment of binding structure in the object language into the meta-theory of the system, thereby greatly simplifying the overall implementation and reasoning processes. We develop this argument in the thesis by presenting, and demonstrating the effectiveness of, an approach to the verified implementation of compiler transformations for functional programs that exploits HOAS. In this approach, transformations on functional programs are first articulated as rule-based relational specifications. These specifications are rendered into programs in the language lambda Prolog. On the one hand, these programs serve directly as implementations. On the other hand, they can be used as input to the Abella system, which allows us to prove properties about them and thereby about the implementations. Both lambda Prolog and Abella support the HOAS approach; together they constitute a framework in which the benefits of HOAS in verified compilation can be tested. We use them to implement and verify a compiler for a representative functional programming language that embodies the transformations forming the core of many compilers for such languages. In both the programming and the reasoning phases, we show how the use of HOAS significantly simplifies representing, manipulating, analyzing, and reasoning about binding structure.
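To make the core idea concrete, here is a minimal OCaml sketch of HOAS (the thesis itself works in lambda Prolog and Abella, not OCaml): object-level binders are represented by meta-level functions, so capture-avoiding substitution is inherited from the meta-language instead of being implemented by hand.

```ocaml
(* HOAS representation of closed lambda terms: the binder of an
   abstraction is a meta-level function, so no Var constructor,
   renaming, or substitution machinery is needed. *)
type tm =
  | App of tm * tm
  | Abs of (tm -> tm)   (* the binder is a meta-level function *)

(* call-by-value evaluation; substitution is just function application *)
let rec eval (t : tm) : tm =
  match t with
  | Abs _ -> t
  | App (m, n) ->
      (match eval m with
       | Abs f -> eval (f (eval n))
       | _ -> failwith "stuck")

(* example: (\x. x) (\y. y) evaluates to \y. y *)
let id = Abs (fun x -> x)
let _ = eval (App (id, Abs (fun y -> y)))
```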




Related research

Robert Sison, 2020
Proving only over source code that programs do not leak sensitive data leaves a gap between reasoning and reality that can only be filled by accounting for the behaviour of the compiler. Furthermore, software does not always have the luxury of limiting itself to single-threaded computation with resources statically dedicated to each user to ensure the confidentiality of their data. This results in mixed-sensitivity concurrent programs, which might reuse memory shared between their threads to hold data of different sensitivity levels at different times; for such programs, a compiler must preserve the value-dependent coordination of such mixed-sensitivity reuse despite the impact of concurrency. Here we demonstrate, using Isabelle/HOL, that it is feasible to verify that a compiler preserves noninterference, the strictest kind of confidentiality property, for mixed-sensitivity concurrent programs. First, we present notions of refinement that preserve a concurrent, value-dependent notion of noninterference that we have designed to support such programs. As proving noninterference-preserving refinement can be considerably more complex than the standard refinements typically used to verify semantics-preserving compilation, our notions include a decomposition principle that separates the semantics-preservation concerns from the security-preservation concerns. Second, we demonstrate that these refinement notions are applicable to verified secure compilation, by exercising them on a single-pass compiler for mixed-sensitivity concurrent programs that synchronise using mutex locks, from a generic imperative language to a generic RISC-style assembly language. Finally, we execute our compiler on a nontrivial mixed-sensitivity concurrent program modelling a real-world use case, thus preserving its source-level noninterference properties down to an assembly-level model automatically. (See paper for complete abstract.)
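To illustrate what "value-dependent coordination of mixed-sensitivity reuse" means, here is a small OCaml sketch of our own (not from the paper): a shared buffer whose classification is recorded in a companion flag and coordinated by a mutex. A compiler that reordered or split these lock-protected accesses could let HIGH data reach the LOW output, which is the kind of behaviour noninterference-preserving refinement rules out.

```ocaml
(* Illustrative only: memory reused for data of different sensitivity
   levels at different times, with value-dependent classification. *)
let lock = Mutex.create ()
let buf = ref 0          (* shared memory reused for HIGH and LOW data *)
let is_high = ref false  (* value-dependent classification of buf *)

let store ~high v =
  Mutex.lock lock;
  is_high := high;       (* reclassify the buffer before reusing it *)
  buf := v;
  Mutex.unlock lock

(* an attacker-visible output channel: reads buf only while it is
   currently classified LOW *)
let output_low () =
  Mutex.lock lock;
  if not !is_high then print_int !buf;
  Mutex.unlock lock
```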
Abstract interpretation is a well-established technique for performing static analyses of logic programs. However, choosing the abstract domain, widening, fixpoint, etc. that provides the best precision-cost trade-off remains an open problem. This is in good part because of the challenges involved in measuring and comparing the precision of different analyses. We propose a new approach for measuring such precision, based on defining distances in abstract domains and extending them to distances between whole analyses of a given program, thus allowing precision to be compared across different analyses. We survey and extend existing proposals for distances and metrics in lattices and abstract domains, and we propose metrics for some common domains used in logic program analysis, as well as extensions of those metrics to the space of whole-program analyses. We implement these metrics within the CiaoPP framework and apply them to measure the precision of different analyses over both benchmarks and a realistic program.
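As a toy illustration of the idea (these are not the metrics actually proposed in the paper), the following OCaml sketch defines a Hausdorff-style distance on a finite interval domain and lifts it pointwise to whole analyses, viewed here as arrays of per-program-point results.

```ocaml
(* A toy interval domain with finite bounds; Bot = unreachable. *)
type itv = Bot | Itv of int * int

(* Hausdorff distance max(|l1-l2|, |u1-u2|) between nonempty intervals;
   the Bot case uses an ad hoc penalty chosen only for illustration. *)
let dist (a : itv) (b : itv) : int =
  match a, b with
  | Bot, Bot -> 0
  | Bot, Itv (l, u) | Itv (l, u), Bot -> u - l + 1
  | Itv (l1, u1), Itv (l2, u2) -> max (abs (l1 - l2)) (abs (u1 - u2))

(* Lift to whole analyses: two analyses assign an interval to each
   program point; aggregate the pointwise distances by summation. *)
let analysis_dist (a1 : itv array) (a2 : itv array) : int =
  Array.fold_left ( + ) 0 (Array.map2 dist a1 a2)
```

A more precise analysis of the same program then sits at a smaller distance from the (hypothetical) most precise result, which is what makes precision comparable across analyses.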
We present a self-certifying compiler for the COGENT systems language. COGENT is a restricted, polymorphic, higher-order, and purely functional language with linear types and without the need for a trusted runtime or garbage collector. It compiles to efficient C code that is designed to interoperate with existing C functions. The language is suited for layered systems code with minimal sharing such as file systems or network protocol control code. For a well-typed COGENT program, the compiler produces C code, a high-level shallow embedding of its semantics in Isabelle/HOL, and a proof that the C code correctly implements this embedding. The aim is for proof engineers to reason about the full semantics of real-world systems code productively and equationally, while retaining the interoperability and leanness of C. We describe the formal verification stages of the compiler, which include automated formal refinement calculi, a switch from imperative update semantics to functional value semantics formally justified by the linear type system, and a number of standard compiler phases such as type checking and monomorphisation. The compiler certificate is a series of language-level meta proofs and per-program translation validation phases, combined into one coherent top-level theorem in Isabelle/HOL.
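The switch from functional value semantics to imperative update semantics can be illustrated with a small OCaml analogy (COGENT itself is a different language, and this sketch is ours): when a value is used linearly, the caller can never observe the old copy, so a functional update and a destructive update are indistinguishable and the compiler may emit the efficient one.

```ocaml
type buf = { mutable len : int; data : bytes }

(* value semantics: conceptually returns a fresh record *)
let set_len_value (b : buf) (n : int) : buf = { b with len = n }

(* update semantics: mutates in place; safe only because a linear
   type discipline guarantees b has no other live aliases *)
let set_len_update (b : buf) (n : int) : buf = b.len <- n; b
```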
The functional correspondence is a manual derivation technique for transforming higher-order evaluators into semantically equivalent abstract machines. The transformation consists of two well-known program transformations: translation to continuation-passing style, which uncovers the control flow of the evaluator, and Reynolds's defunctionalization, which generates a first-order transition function. Ever since the transformation was first described by Danvy et al., it has found numerous applications in connecting known evaluators and abstract machines, but also in discovering new abstract machines for a variety of $\lambda$-calculi as well as for logic-programming, imperative, and object-oriented languages. We present an algorithm that automates the functional correspondence. The algorithm accepts an evaluator written in a dedicated minimal functional meta-language; it first transforms the evaluator to administrative normal form, which facilitates program analysis, before performing a selective translation to continuation-passing style and a selective defunctionalization. The two selective transformations are driven by a control-flow analysis computed by an abstract interpreter obtained using the abstracting abstract machines methodology, which makes it possible to transform only the desired parts of the evaluator. The article is accompanied by an implementation of the algorithm in the form of a command-line tool that allows for automatic transformation of an evaluator embedded in a Racket source file and gives fine-grained control over the resulting machine.
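As a hand-worked instance of the derivation the paper automates, the following OCaml sketch takes a direct-style evaluator for the untyped lambda calculus through the two transformations and arrives at a CEK-style machine. The language and names here are ours; the paper's tool operates on its own meta-language embedded in Racket.

```ocaml
type term = Var of string | Lam of string * term | App of term * term
type value = Clo of string * term * env
and env = (string * value) list

(* step 1: direct-style higher-order evaluator *)
let rec eval t e =
  match t with
  | Var x -> List.assoc x e
  | Lam (x, b) -> Clo (x, b, e)
  | App (f, a) ->
      let (Clo (x, b, e')) = eval f e in
      eval b ((x, eval a e) :: e')

(* step 2: translation to continuation-passing style uncovers control flow *)
let rec evalk t e k =
  match t with
  | Var x -> k (List.assoc x e)
  | Lam (x, b) -> k (Clo (x, b, e))
  | App (f, a) ->
      evalk f e (fun (Clo (x, b, e')) ->
        evalk a e (fun v -> evalk b ((x, v) :: e') k))

(* step 3: defunctionalize the continuations into first-order data;
   the result is recognizably the CEK machine *)
type cont =
  | Halt
  | EvArg of term * env * cont           (* waiting for the function value *)
  | Apply of string * term * env * cont  (* waiting for the argument value *)

let rec run t e k =
  match t with
  | Var x -> continue k (List.assoc x e)
  | Lam (x, b) -> continue k (Clo (x, b, e))
  | App (f, a) -> run f e (EvArg (a, e, k))
and continue k v =
  match k with
  | Halt -> v
  | EvArg (a, e, k) -> let (Clo (x, b, e')) = v in run a e (Apply (x, b, e', k))
  | Apply (x, b, e', k) -> run b ((x, v) :: e') k
```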
We implement extraction of Coq programs to functional languages based on MetaCoq's certified erasure. We extend the MetaCoq erasure output language with typing information and use it as an intermediate representation, which we call $\lambda^T_\square$. We complement the extraction functionality with a full pipeline that includes several standard transformations (eta-expansion, inlining, etc.) implemented in a proof-generating manner, along with a verified optimisation pass removing unused arguments. We prove the pass correct with respect to a conventional call-by-value operational semantics of functional languages. From the optimised $\lambda^T_\square$ representation, we obtain code in two functional smart-contract languages (Liquidity and CameLIGO), the functional language Elm, and a subset of Rust, a multi-paradigm language for systems programming. Rust is currently gaining popularity as a language for smart contracts, and we demonstrate how our extraction can be used to extract smart-contract code for the Concordium network. The development is done in the context of the ConCert framework, which enables smart-contract verification. We contribute two verified real-world smart contracts (boardroom voting and escrow), which we use, among other examples, to demonstrate the applicability of the pipeline. In addition, we develop a verified web application and extract it to fully functional Elm code. In total, this gives us a way to write dependently typed programs in Coq, verify them, and then extract them to several target languages while retaining a small trusted computing base consisting only of MetaCoq and the pretty-printers into these languages.
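As a simplified illustration of what a dead-argument pass does (far simpler than the verified pass in the paper, on a tiny language of our own invention), the following OCaml sketch removes unused parameters from definitions and shrinks every call site consistently. Dropping an argument expression is only sound here because the toy language is effect-free; the paper additionally proves such a pass correct against a call-by-value semantics.

```ocaml
(* a tiny effect-free first-order language *)
type expr =
  | Var of string
  | Int of int
  | Call of string * expr list

type def = { name : string; params : string list; body : expr }

let rec uses x = function
  | Var y -> x = y
  | Int _ -> false
  | Call (_, args) -> List.exists (uses x) args

(* keep-mask per parameter position of a definition (conservative:
   a parameter merely forwarded to another call still counts as used) *)
let mask (d : def) : bool list = List.map (fun p -> uses p d.body) d.params

let filter_by m xs = List.filteri (fun i _ -> List.nth m i) xs

(* drop dead parameters and rewrite all call sites to match *)
let optimise (defs : def list) : def list =
  let masks = List.map (fun d -> (d.name, mask d)) defs in
  let rec rw = function
    | Var x -> Var x
    | Int n -> Int n
    | Call (f, args) ->
        let args = List.map rw args in
        (match List.assoc_opt f masks with
         | Some m -> Call (f, filter_by m args)
         | None -> Call (f, args))
  in
  List.map
    (fun d ->
      let m = List.assoc d.name masks in
      { d with params = filter_by m d.params; body = rw d.body })
    defs
```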