
Normalization by gluing for free λ-theories

Added by Jonathan Sterling
Publication date: 2018
Language: English





The connection between normalization by evaluation, logical predicates and semantic gluing constructions is a matter of folklore, worked out in varying degrees within the literature. In this note, we present an elementary version of the gluing technique which corresponds closely with both semantic normalization proofs and the syntactic normalization by evaluation.
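
For readers unfamiliar with the syntactic side of this picture, the following is a minimal normalization-by-evaluation sketch in Haskell for the untyped lambda calculus: terms are evaluated into a semantic domain and then quoted back to beta-normal forms. This is only the folklore recipe the note relates to, not the gluing construction for free lambda-theories presented in the paper.

-- Minimal normalization by evaluation for the untyped lambda calculus.
-- Syntax uses de Bruijn indices.
data Term = Var Int | Lam Term | App Term Term
  deriving Show

-- Semantic values: functions become Haskell functions; stuck terms
-- (neutrals) stay symbolic so they can be read back.
data Value = VLam (Value -> Value) | VNeutral Neutral
data Neutral = NVar Int | NApp Neutral Value

-- Evaluation of a term in an environment of values.
eval :: [Value] -> Term -> Value
eval env (Var i)   = env !! i
eval env (Lam b)   = VLam (\v -> eval (v : env) b)
eval env (App f a) = apply (eval env f) (eval env a)

apply :: Value -> Value -> Value
apply (VLam f)     v = f v
apply (VNeutral n) v = VNeutral (NApp n v)

-- Quotation: read a value back as a beta-normal term. The Int counts
-- how many binders we are under, so fresh neutral variables get levels.
quote :: Int -> Value -> Term
quote d (VLam f)     = Lam (quote (d + 1) (f (VNeutral (NVar d))))
quote d (VNeutral n) = quoteNe d n

quoteNe :: Int -> Neutral -> Term
quoteNe d (NVar k)   = Var (d - k - 1)   -- convert level back to index
quoteNe d (NApp n v) = App (quoteNe d n) (quote d v)

-- Normalization of a closed term: evaluate, then quote.
normalize :: Term -> Term
normalize = quote 0 . eval []

For example, normalize (App (Lam (Var 0)) (Lam (Var 0))) returns Lam (Var 0).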

Related research

We present two extensions of the LF Constructive Type Theory featuring monadic locks. A lock is a monadic type construct that captures the effect of an external call to an oracle. Such calls are the basic tool for gluing together diverse Type Theories and proof development environments. The oracle can be invoked either to check that a constraint holds or to provide a suitable witness. The systems are presented in the canonical style developed by the CMU School. The first system, CLLFP, is the canonical version of the system LLFP, presented earlier by the authors. The second system, CLLFP?, features the possibility of invoking the oracle to obtain a witness satisfying a given constraint. We discuss encodings of Fitch-Prawitz Set theory, call-by-value lambda-calculi, and systems of Light Linear Logic. Finally, we show how to use Fitch-Prawitz Set Theory to define a type system that types precisely the strongly normalizing terms.
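
As a loose illustration of the idea of a lock as a monadic construct around an external oracle call, here is a small Haskell sketch. It is not the syntax or semantics of CLLFP; the names Lock, require, and runLock are hypothetical and only convey the idea of a computation whose result is released once an oracle validates a constraint.

import Control.Monad (ap)

-- A lock computation yields its result only if the external oracle
-- accepts every constraint it was asked to check.
newtype Lock c a = Lock { runLock :: (c -> Bool) -> Maybe a }

instance Functor (Lock c) where
  fmap f (Lock g) = Lock (\oracle -> fmap f (g oracle))

instance Applicative (Lock c) where
  pure a = Lock (\_ -> Just a)
  (<*>)  = ap

instance Monad (Lock c) where
  Lock g >>= k = Lock (\oracle -> g oracle >>= \a -> runLock (k a) oracle)

-- Invoke the oracle to check that a constraint holds; a variant in the
-- spirit of CLLFP? would instead ask the oracle for a witness.
require :: c -> Lock c ()
require c = Lock (\oracle -> if oracle c then Just () else Nothing)

-- Example: a result guarded by a side condition handed to the oracle.
example :: Lock String Int
example = do
  require "x : Nat"
  pure 42
-- runLock example (const True)  == Just 42
-- runLock example (const False) == Nothing
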
Giulio Manzonetto (2009)
In this paper we briefly summarize the contents of Manzonetto's PhD thesis, which concerns denotational semantics and equational/order theories of the pure untyped lambda-calculus. The main research achievements include: (i) a general construction of lambda-models from reflexive objects in (possibly non-well-pointed) categories; (ii) a Stone-style representation theorem for combinatory algebras; (iii) a proof that no effective lambda-model can have lambda-beta or lambda-beta-eta as its equational theory (this can be seen as a partial answer to an open problem introduced by Honsell-Ronchi Della Rocca in 1984).
This paper explores two topics at once: the use of denotational semantics to bound the evaluation length of functional programs, and the semantics of strong (that is, possibly under abstractions) call-by-value evaluation. About the first, we analyze de Carvalho's seminal use of relational semantics for bounding the evaluation length of lambda-terms, starting from the presentation of the semantics as an intersection types system. We focus on the part of his work which is usually neglected in its many recent adaptations, despite being probably the conceptually deeper one: how to transfer the bounding power from the type system to the relational semantics itself. We dissect this result and re-understand it via the isolation of a simpler size representation property. About the second, we use relational semantics to develop a semantical study of strong call-by-value evaluation, which is both a delicate and neglected topic. We give a semantic characterization of terms normalizable with respect to strong evaluation, providing in particular the first result of adequacy with respect to strong call-by-value. Moreover, we extract bounds about strong evaluation from both the type systems and the relational semantics. Essentially, we use strong call-by-value to revisit de Carvalho's semantic bounds, and de Carvalho's technique to provide semantical foundations for strong call-by-value.
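
To make concrete the quantity these bounds are about, here is a small, self-contained call-by-value evaluator in Haskell that counts beta-steps. It is weak (it does not reduce under abstractions) and only illustrates what "evaluation length" means; it is not de Carvalho's relational semantics nor the intersection type system.

-- De Bruijn syntax and capture-avoiding substitution.
data Term = Var Int | Lam Term | App Term Term
  deriving Show

shift :: Int -> Int -> Term -> Term
shift d c (Var i)   = Var (if i >= c then i + d else i)
shift d c (Lam b)   = Lam (shift d (c + 1) b)
shift d c (App f a) = App (shift d c f) (shift d c a)

subst :: Int -> Term -> Term -> Term
subst j s (Var i)
  | i == j    = s
  | i > j     = Var (i - 1)   -- one binder was removed by the beta-step
  | otherwise = Var i
subst j s (Lam b)   = Lam (subst (j + 1) (shift 1 0 s) b)
subst j s (App f a) = App (subst j s f) (subst j s a)

-- Weak call-by-value evaluation of a closed term, returning the value
-- together with the number of beta-steps performed. Strong call-by-value
-- would additionally normalize the bodies of abstractions.
eval :: Term -> (Term, Int)
eval v@(Var _) = (v, 0)
eval l@(Lam _) = (l, 0)
eval (App f a) =
  let (f', m) = eval f
      (a', n) = eval a
  in case f' of
       Lam b -> let (r, k) = eval (subst 0 a' b) in (r, m + n + k + 1)
       _     -> (App f' a', m + n)
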
This is a survey of our program of perturbative quantization of gauge theories on manifolds with boundary compatible with cutting/pasting and with gauge symmetry treated by means of a cohomological resolution (Batalin-Vilkovisky) formalism. We also give two explicit quantum examples -- abelian BF theory and the Poisson sigma model. This exposition is based on a talk by P.M. at the ICMP 2015 in Santiago de Chile.
Lambda-calculi come with no fixed evaluation strategy. Different strategies may then be considered, and it is important that they satisfy some abstract rewriting property, such as factorization or normalization theorems. In this paper we provide simple proof techniques for these theorems. Our starting point is a revisitation of Takahashi's technique to prove factorization for head reduction. Our technique is both simpler and more powerful, as it works in cases where Takahashi's does not. We then pair factorization with two other abstract properties, defining essential systems, and show that normalization follows. Concretely, we apply the technique to four case studies: two classic ones, head and leftmost-outermost reduction, and two less classic ones, non-deterministic weak call-by-value and least-level reduction.
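
For concreteness, here is a minimal Haskell sketch of one of the strategies mentioned, leftmost-outermost (normal-order) reduction, as a single-step function iterated to a normal form. It illustrates the strategy itself, not the factorization or essential-systems proof technique.

-- De Bruijn syntax and substitution, as in the previous sketch.
data Term = Var Int | Lam Term | App Term Term
  deriving Show

shift :: Int -> Int -> Term -> Term
shift d c (Var i)   = Var (if i >= c then i + d else i)
shift d c (Lam b)   = Lam (shift d (c + 1) b)
shift d c (App f a) = App (shift d c f) (shift d c a)

subst :: Int -> Term -> Term -> Term
subst j s (Var i)
  | i == j    = s
  | i > j     = Var (i - 1)
  | otherwise = Var i
subst j s (Lam b)   = Lam (subst (j + 1) (shift 1 0 s) b)
subst j s (App f a) = App (subst j s f) (subst j s a)

-- One leftmost-outermost step, or Nothing if the term is normal.
step :: Term -> Maybe Term
step (App (Lam b) a) = Just (subst 0 a b)             -- fire the outermost redex
step (App f a)       = case step f of                 -- otherwise look left first
                         Just f' -> Just (App f' a)
                         Nothing -> App f <$> step a  -- then right
step (Lam b)         = Lam <$> step b                 -- reduce under binders
step (Var _)         = Nothing

-- Iterating the step function terminates exactly on normalizing terms,
-- which is the content of the normalization theorem for this strategy.
normalize :: Term -> Term
normalize t = maybe t normalize (step t)
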
