We explore space improvements in LRP, a polymorphically typed call-by-need functional core language. A relaxed space measure is chosen: the maximal space used during an evaluation. It abstracts from the implementation details of abstract machines, but takes garbage collection into account and can thus be seen as a realistic approximation of space usage. The results are: a context lemma for space-improving translations and for space equivalences; a proof that all reduction rules of the calculus except one are space improvements, together with bounds on the space increase for the exceptional rule; and proofs that several further program transformations are space improvements or space equivalences, in particular that the translation into machine expressions is a space equivalence. We also classify certain space-worsening transformations as space leaks or as space-safe. These results are a step toward predicting how optimizing transformations change the runtime space behavior of call-by-need functional languages.
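The paper's measure is defined for the core language LRP, but the phenomenon it captures is easy to see in any call-by-need language. The Haskell sketch below is our own illustration, not an example from the paper: sharing the list between two folds in `mean1` keeps the entire list live, so its maximal space is linear in the input, whereas the single-pass `mean2` runs in constant space. Rewriting `mean1` into `mean2` is therefore a space improvement, and the reverse rewrite is a space leak.

```haskell
{-# LANGUAGE BangPatterns #-}
module Main where

-- Both folds share the list, so under call-by-need the whole
-- spine stays live until 'length' finishes: O(n) maximal space.
mean1 :: [Double] -> Double
mean1 xs = sum xs / fromIntegral (length xs)

-- A single strict pass lets the garbage collector reclaim each
-- cell as soon as it has been consumed: O(1) maximal space.
mean2 :: [Double] -> Double
mean2 = go 0 0
  where
    go :: Double -> Int -> [Double] -> Double
    go !s !n []       = s / fromIntegral n
    go !s !n (x:rest) = go (s + x) (n + 1) rest

main :: IO ()
main = print (mean2 [1 .. 1000000])
```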
While modern software development heavily uses versioned packages, programming languages rarely support the concept …
We present a self-certifying compiler for the COGENT systems language. COGENT is a restricted, polymorphic, higher-order, and purely functional language with linear types and without the need for a trusted runtime or garbage collector. It compiles to …
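COGENT's concrete syntax is not shown in this snippet, so as a rough illustration of what linear types give a systems language, here is a GHC Haskell sketch (using the LinearTypes extension; the resource and function names are our own invention, not COGENT's API). The linear arrow forces every holder of the resource to release it exactly once, checked statically, with no trusted runtime or garbage collector involved.

```haskell
{-# LANGUAGE LinearTypes #-}
module Main where

-- A toy resource that must be released exactly once.
newtype Buffer = Buffer String

acquire :: String -> Buffer
acquire s = Buffer s

-- The linear arrow (%1 ->) means a linearly held Buffer must flow
-- into 'release' exactly once; dropping it or releasing it twice
-- is rejected by the type checker rather than caught at runtime.
release :: Buffer %1 -> String
release (Buffer name) = "released " ++ name

-- Threads the freshly acquired buffer into a linear continuation.
withBuffer :: String -> (Buffer %1 -> String) -> String
withBuffer name k = k (acquire name)

main :: IO ()
main = putStrLn (withBuffer "scratch" release)
```

This sketch needs GHC 9.0 or later; COGENT builds the same discipline into the language itself rather than bolting it on as an extension.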
We present FunTAL, the first multi-language system to formalize safe interoperability between a high-level functional language and low-level assembly code while supporting compositional reasoning about the mix. A central challenge in developing such …
The use of adaptive workflow management for in situ visualization and analysis has been a growing trend in large-scale scientific simulations. However, coordinating adaptive workflows with traditional procedural programming languages can be difficult …
How does one compile derivatives of tensor programs so that the resulting code is purely functional (hence easier to optimize and parallelize) and provably efficient relative to the original program? We show that naively differentiating tensor code …
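The snippet cuts off mid-sentence, but the problem it poses, producing derivative code that is itself purely functional, has a textbook instance worth seeing. The Haskell sketch below is our own example, not the paper's compiler: forward-mode automatic differentiation with dual numbers yields a derivative that is an ordinary pure function, although this naive scalar version makes none of the paper's efficiency guarantees for tensor programs.

```haskell
module Main where

-- A dual number pairs a value with its derivative (tangent).
data Dual = Dual { primal :: Double, tangent :: Double }
  deriving Show

instance Num Dual where
  Dual x dx + Dual y dy = Dual (x + y) (dx + dy)
  Dual x dx - Dual y dy = Dual (x - y) (dx - dy)
  Dual x dx * Dual y dy = Dual (x * y) (x * dy + dx * y)  -- product rule
  negate (Dual x dx)    = Dual (negate x) (negate dx)
  abs    (Dual x dx)    = Dual (abs x) (dx * signum x)
  signum (Dual x _)     = Dual (signum x) 0
  fromInteger n         = Dual (fromInteger n) 0

-- Differentiate any function that is polymorphic over Num.
deriv :: (Dual -> Dual) -> Double -> Double
deriv f x = tangent (f (Dual x 1))

-- f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2 and f'(2) = 14.
f :: Num a => a -> a
f x = x * x * x + 2 * x

main :: IO ()
main = print (deriv f 2)  -- prints 14.0
```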