
On reduction and normalization in the computational core

Published by: Riccardo Treglia
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





We study the reduction in a lambda-calculus derived from Moggi's computational one, which we call the computational core. The reduction relation consists of rules obtained by orienting three monadic laws. Such laws, in particular associativity and identity, introduce intricacies into the operational analysis. We investigate the central notions of returning a value versus having a normal form, and address the question of normalizing strategies. Our analysis relies on factorization results.
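To make the oriented laws concrete, here is a minimal sketch, in Haskell, of a toy term language in which the three monadic laws are read left-to-right as rewrite rules. The names (Term, step, subst) are ours, not the paper's exact syntax, and the contextual closure of the rules is omitted.

```haskell
-- A toy term language: Ret plays the role of the monadic unit ("return"),
-- Bind of the monadic bind, with an abstraction as its second argument.
data Term
  = Var String
  | Ret Term              -- return V : the computation that just returns a value
  | Bind Term Term        -- M >>= (\x -> N), written Bind M (Lam x N)
  | Lam String Term
  deriving (Show, Eq)

-- Naive substitution (ignores variable capture; enough for this sketch).
subst :: String -> Term -> Term -> Term
subst x v (Var y)    = if x == y then v else Var y
subst x v (Ret t)    = Ret (subst x v t)
subst x v (Bind m n) = Bind (subst x v m) (subst x v n)
subst x v (Lam y b)  = if x == y then Lam y b else Lam y (subst x v b)

-- One root-level step, obtained by orienting the three monadic laws.
step :: Term -> Maybe Term
step (Bind (Ret v) (Lam x n)) = Just (subst x v n)           -- left identity (beta-like)
step (Bind m (Lam x (Ret (Var y))))
  | x == y                    = Just m                        -- right identity (eta-like)
step (Bind (Bind m (Lam x n)) l) =
  Just (Bind m (Lam x (Bind n l)))                            -- associativity
step _                        = Nothing
```

The associativity and identity rules are exactly the ones the abstract points to as the source of the operational intricacies: unlike plain beta-reduction, they rearrange the shape of a computation without consuming a value.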




Read also

This paper explores two topics at once: the use of denotational semantics to bound the evaluation length of functional programs, and the semantics of strong (that is, possibly under abstractions) call-by-value evaluation. About the first, we analyze de Carvalho's seminal use of relational semantics for bounding the evaluation length of lambda-terms, starting from the presentation of the semantics as an intersection type system. We focus on the part of his work which is usually neglected in its many recent adaptations, despite being probably the conceptually deeper one: how to transfer the bounding power from the type system to the relational semantics itself. We dissect this result and re-understand it via the isolation of a simpler size representation property. About the second, we use relational semantics to develop a semantical study of strong call-by-value evaluation, which is both a delicate and neglected topic. We give a semantic characterization of terms normalizable with respect to strong evaluation, providing in particular the first result of adequacy with respect to strong call-by-value. Moreover, we extract bounds about strong evaluation from both the type systems and the relational semantics. Essentially, we use strong call-by-value to revisit de Carvalho's semantic bounds, and de Carvalho's technique to provide semantical foundations for strong call-by-value.
Normalization fails in type theory with an impredicative universe of propositions and a proof-irrelevant propositional equality. The counterexample to normalization is adapted from Girard's counterexample against normalization of System F equipped with a decider for type equality. It refutes Werner's normalization conjecture [LMCS 2008].
The proceedings consist of a keynote paper by Alberto followed by 6 invited papers written by Lorenzo Clemente (U. Warsaw), Alain Finkel (U. Paris-Saclay), John Gallagher (Roskilde U. and IMDEA Software Institute) et al., Neil Jones (U. Copenhagen) et al., Michael Leuschel (Heinrich-Heine U.) and Maurizio Proietti (IASI-CNR) et al. These invited papers are followed by 4 regular papers accepted at VPT 2020 and the papers of HCVS 2020, which consist of three contributed papers and an invited paper on the third competition of solvers for Constrained Horn Clauses. In addition, the abstracts (in HTML format) of 3 invited talks at VPT 2020 by Andrzej Skowron (U. Warsaw), Sophie Renault (EPO) and Moa Johansson (Chalmers U.) are included.
Lambda-calculi come with no fixed evaluation strategy. Different strategies may then be considered, and it is important that they satisfy some abstract rewriting property, such as factorization or normalization theorems. In this paper we provide simple proof techniques for these theorems. Our starting point is a revisitation of Takahashi's technique to prove factorization for head reduction. Our technique is both simpler and more powerful, as it works in cases where Takahashi's does not. We then pair factorization with two other abstract properties, defining essential systems, and show that normalization follows. Concretely, we apply the technique to four case studies, two classic ones, head and leftmost-outermost reductions, and two less classic ones, non-deterministic weak call-by-value and least-level reductions.
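For readers unfamiliar with the classic case study mentioned above, the following is a minimal Haskell sketch, under our own names (LTerm, loStep, normalize), of leftmost-outermost (normal-order) one-step reduction on pure lambda-terms; iterating it reaches the normal form whenever one exists, which is the kind of normalization theorem the paper's abstract technique recovers.

```haskell
-- Pure lambda-terms: variables, applications, abstractions.
data LTerm = V String | A LTerm LTerm | L String LTerm
  deriving (Show, Eq)

-- Naive substitution, ignoring variable capture for brevity.
substL :: String -> LTerm -> LTerm -> LTerm
substL x s (V y)   = if x == y then s else V y
substL x s (A m n) = A (substL x s m) (substL x s n)
substL x s (L y b) = if x == y then L y b else L y (substL x s b)

-- One leftmost-outermost step: contract the leftmost-outermost redex, if any.
loStep :: LTerm -> Maybe LTerm
loStep (A (L x b) n) = Just (substL x n b)                  -- outermost redex first
loStep (A m n)       = case loStep m of
                         Just m' -> Just (A m' n)           -- then inside the function part
                         Nothing -> A m <$> loStep n        -- then inside the argument
loStep (L x b)       = L x <$> loStep b                     -- finally under the abstraction
loStep (V _)         = Nothing

-- Iterate until no redex remains; terminates exactly on normalizing terms.
normalize :: LTerm -> LTerm
normalize t = maybe t normalize (loStep t)
```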
Jakob Rehof, 2015
This volume contains a final and revised selection of papers presented at the Seventh Workshop on Intersection Types and Related Systems (ITRS 2014), held in Vienna (Austria) on July 18th, affiliated with TLCA 2014, Typed Lambda Calculi and Applications (held jointly with RTA, Rewriting Techniques and Applications) as part of FLoC and the Vienna Summer of Logic (VSL) 2014. Intersection types were introduced in the late 1970s as a language for describing properties of lambda calculus which were not captured by all previous type systems. They provided the first characterisation of strongly normalising lambda terms and have become a powerful syntactic and semantic tool for analysing various normalisation properties as well as lambda models. Over the years the scope of research on intersection types has broadened. Recently, there have been a number of breakthroughs in the use of intersection types and similar technology for practical purposes such as program analysis, verification and concurrency, and program synthesis. The aim of the ITRS workshop series is to bring together researchers working on both the theory and practical applications of systems based on intersection types and related approaches (e.g., union types, refinement types, behavioral types).