
Denotational recurrence extraction for amortized analysis

Added by Norman Danner
Publication date: 2020
Language: English





A typical way of analyzing the time complexity of functional programs is to extract a recurrence expressing the running time of the program in terms of the size of its input, and then to solve the recurrence to obtain a big-O bound. For recurrence extraction to be compositional, it is also necessary to extract recurrences for the size of outputs of helper functions. Previous work has developed techniques for using logical relations to state a formal correctness theorem for a general recurrence extraction translation: a program is bounded by a recurrence when the operational cost is bounded by the extracted cost, and the output value is bounded, according to a value bounding relation defined by induction on types, by the extracted size. This previous work supports higher-order functions by viewing recurrences as programs in a lambda-calculus, or as mathematical entities in a denotational semantics thereof. In this paper, we extend these techniques to support amortized analysis, where costs are rearranged from one portion of a program to another to achieve more precise bounds. We give an intermediate language in which programs can be annotated according to the banker's method of amortized analysis; this language has an affine type system to ensure credits are not spent more than once. We give a recurrence extraction translation of this language into a recurrence language, a simply-typed lambda-calculus with a cost type, and state and prove a bounding logical relation expressing the correctness of this translation. The recurrence language has a denotational semantics in preorders, and we use this semantics to solve recurrences, e.g., analyzing binary counters and splay trees.
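
To make the banker's-method accounting concrete, here is a minimal OCaml sketch of the binary-counter example mentioned in the abstract. The credit discipline appears only in comments; the paper's affine intermediate language enforces it in the type system, and this inc function is an illustrative stand-in, not the paper's formal development.

    (* A binary counter as a list of bits, least-significant first.
       Banker's-method invariant (informal): every One bit carries one
       stored credit, so [inc] is O(1) amortized even though a single
       call may flip many bits. *)
    type bit = Zero | One

    let rec inc (c : bit list) : bit list =
      match c with
      | []        -> [One]          (* pay 1 flip, store 1 credit *)
      | Zero :: t -> One :: t       (* pay 1 flip, store 1 credit *)
      | One  :: t -> Zero :: inc t  (* flip paid by the stored credit *)

    (* n increments from the empty counter flip at most 2n bits in
       total, so the extracted amortized recurrence for inc is
       constant, as in the paper's binary-counter analysis. *)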



Related research

A standard informal method for analyzing the asymptotic complexity of a program is to extract a recurrence that describes its cost in terms of the size of its input, and then to compute a closed-form upper bound on that recurrence. We give a formal account of that method for functional programs in a higher-order language with let-polymorphism. The method consists of two phases. In the first phase, a monadic translation is performed to extract a cost-annotated version of the original program. In the second phase, the extracted program is interpreted in a model. The key feature of this second phase is that different models describe different notions of size. This plays out specifically for values of inductive type, where different notions of size may be appropriate depending on the analysis, and for polymorphic functions, where we show that the notion of size for a polymorphic function can be described formally as the data that is common to the notions of size of its instances. We give several examples of different models that formally justify various informal cost analyses to show the applicability of our approach.
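
As a rough illustration of the first phase, the monadic translation can be pictured as a writer-style cost monad: each translated function returns its result paired with a step count. The OCaml sketch below is a toy under that assumption, not the paper's actual translation; tick, length', and the cost representation are all illustrative names.

    (* Writer-style cost monad: a value paired with a step count. *)
    type 'a cost = int * 'a

    let return (x : 'a) : 'a cost = (0, x)

    let ( >>= ) ((c, x) : 'a cost) (f : 'a -> 'b cost) : 'b cost =
      let (c', y) = f x in
      (c + c', y)

    let tick : unit cost = (1, ())   (* charge one evaluation step *)

    (* Cost-annotated translation of list length: one tick per call,
       giving the recurrence T(n) = T(n - 1) + 1 with T(0) = 1. *)
    let rec length' (xs : 'a list) : int cost =
      tick >>= fun () ->
      match xs with
      | []     -> return 0
      | _ :: t -> length' t >>= fun n -> return (n + 1)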
Marco Comini, 2021
PROMELA (Process Meta Language) is a high-level specification language designed for modeling interactions in distributed systems. PROMELA is used as the input language for the model checker SPIN (Simple Promela INterpreter). The main characteristics of PROMELA are non-determinism, process communication through synchronous as well as asynchronous channels, and the possibility to dynamically create instances of processes. In this paper, we introduce a bottom-up fixpoint semantics that aims to model the behavior of PROMELA programs. This work is the first step towards a more ambitious goal where analysis and verification techniques based on abstract interpretation would be defined on top of such a semantics.
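
The paper targets PROMELA specifically, but the general shape of a bottom-up fixpoint semantics can be sketched generically: iterate a monotone semantic operator from a bottom element until nothing new is added. The domain, operator, and toy instance in this OCaml sketch are placeholders of ours, not the paper's construction.

    (* Kleene iteration: least fixpoint of a monotone operator [f]
       over a domain with equality [eq], starting from bottom. In a
       bottom-up semantics, bottom is the empty set of behaviours and
       [f] adds the behaviours derivable in one more step. *)
    let rec lfp ~(eq : 'a -> 'a -> bool) (f : 'a -> 'a) (bot : 'a) : 'a =
      let next = f bot in
      if eq next bot then bot else lfp ~eq f next

    (* Toy instance: states reachable from 0 under a bounded successor. *)
    let step states =
      List.sort_uniq compare
        (0 :: List.filter_map (fun s -> if s < 3 then Some (s + 1) else None)
                states)

    let reachable = lfp ~eq:( = ) step []   (* [0; 1; 2; 3] *)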
We consider the problem of automatically proving resource bounds. That is, we study how to prove that an integer-valued resource variable is bounded by a given program expression. Automatic resource-bound analysis has recently received significant attention because of a number of important applications (e.g., detecting performance bugs, preventing algorithmic-complexity attacks, identifying side-channel vulnerabilities), where the focus has often been on developing precise amortized reasoning techniques to infer the most exact resource usage. While such innovations remain critical, we observe that fully precise amortization is not always necessary to prove a bound of interest. And in fact, by amortizing selectively, the needed supporting invariants can be simpler, making the invariant inference task more feasible and predictable. We present a framework for selectively-amortized analysis that mixes worst-case and amortized reasoning via a property decomposition and a program transformation. We show that proving bounds in any such decomposition yields a sound resource bound in the original program, and we give an algorithm for selecting a reasonable decomposition.
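
A standard example of the gap between worst-case and amortized reasoning, sketched in OCaml below: a single push into a doubling array can cost O(n), yet the total over n pushes is linear. A selectively-amortized analysis in the spirit of the abstract could amortize only the copying cost and treat everything else worst-case; the code and the resource counter cost are illustrative, not the paper's framework.

    (* Resource variable: total units of work performed. *)
    let cost = ref 0

    (* Doubling on overflow: one push may copy the whole array. *)
    let grow arr len =
      let bigger = Array.make (2 * Array.length arr) 0 in
      Array.blit arr 0 bigger 0 len;
      cost := !cost + len;          (* one unit per copied element *)
      bigger

    let push (arr, len) x =
      let arr = if len = Array.length arr then grow arr len else arr in
      arr.(len) <- x;
      incr cost;                    (* one unit for the write *)
      (arr, len + 1)

    (* Worst case per push is O(len), but over a whole run the copies
       sum to < 2n, so n pushes satisfy !cost <= 3n. *)
    let () =
      let a = ref (Array.make 1 0, 0) in
      for i = 1 to 100 do a := push !a i done;
      assert (!cost <= 3 * 100)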
We present a modular semantic account of Bayesian inference algorithms for probabilistic programming languages, as used in data science and machine learning. Sophisticated inference algorithms are often explained in terms of composition of smaller parts. However, neither their theoretical justification nor their implementation reflects this modularity. We show how to conceptualise and analyse such inference algorithms as manipulating intermediate representations of probabilistic programs using higher-order functions and inductive types, and their denotational semantics. Semantic accounts of continuous distributions use measurable spaces. However, our use of higher-order functions presents a substantial technical difficulty: it is impossible to define a measurable space structure over the collection of measurable functions between arbitrary measurable spaces that is compatible with standard operations on those functions, such as function application. We overcome this difficulty using quasi-Borel spaces, a recently proposed mathematical structure that supports both function spaces and continuous distributions. We define a class of semantic structures for representing probabilistic programs, and semantic validity criteria for transformations of these representations in terms of distribution preservation. We develop a collection of building blocks for composing representations. We use these building blocks to validate common inference algorithms such as Sequential Monte Carlo and Markov Chain Monte Carlo. To emphasize the connection between the semantic manipulation and its traditional measure-theoretic origins, we use Kock's synthetic measure theory. We demonstrate its usefulness by proving a quasi-Borel counterpart to the Metropolis-Hastings-Green theorem.
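
One of the simplest intermediate representations in this spirit is a program as a weighted sampler, with a resampling building block in the Sequential Monte Carlo style. The OCaml sketch below is a toy stand-in under ordinary sampling semantics; the paper's representations live in quasi-Borel spaces, which this code makes no attempt to model.

    (* A probabilistic program as a weighted sampler: each run yields
       an importance weight together with a value. *)
    type 'a weighted = unit -> float * 'a

    let return (x : 'a) : 'a weighted = fun () -> (1.0, x)

    let ( >>= ) (m : 'a weighted) (f : 'a -> 'b weighted) : 'b weighted =
      fun () ->
        let (w, x) = m () in
        let (w', y) = f x () in
        (w *. w', y)

    (* Conditioning multiplies in a likelihood weight. *)
    let score (w : float) : unit weighted = fun () -> (w, ())

    (* SMC-style building block: draw [n] particles, then resample
       them in proportion to their weights. *)
    let resample (n : int) (m : 'a weighted) : 'a list =
      let particles = List.init n (fun _ -> m ()) in
      let total = List.fold_left (fun a (w, _) -> a +. w) 0.0 particles in
      List.init n (fun _ ->
          let u = Random.float total in
          let rec pick acc = function
            | [ (_, x) ] -> x
            | (w, x) :: rest ->
                if acc +. w >= u then x else pick (acc +. w) rest
            | [] -> assert false
          in
          pick 0.0 particles)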
In this paper, we study the phenomenon that instruction sequences are split into fragments which somehow produce a joint behaviour. In order to bring this phenomenon better into the picture, we formalize a simple mechanism by which several instruction sequence fragments can produce a joint behaviour. We also show that, even in the case of this simple mechanism, it is a non-trivial matter to explain by means of a translation into a single instruction sequence what takes place on execution of a collection of instruction sequence fragments.
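
A toy rendering of the mechanism the abstract describes, in OCaml: fragments as instruction arrays, with an instruction that transfers control to the start of another fragment, so the behaviour emerges only from the collection as a whole. All names, and the fuel bound used to cut off loops, are ours, purely for illustration.

    (* An instruction either emits an action and falls through, or
       switches execution to the start of another fragment. *)
    type instr =
      | Emit of string
      | Goto of int     (* continue at fragment [i] *)
      | Halt

    let run (fragments : instr array array) : string list =
      let rec go frag pc acc fuel =
        if fuel = 0 then List.rev acc      (* guard against loops *)
        else match fragments.(frag).(pc) with
          | Emit a -> go frag (pc + 1) (a :: acc) (fuel - 1)
          | Goto f -> go f 0 acc (fuel - 1)
          | Halt   -> List.rev acc
      in
      go 0 0 [] 1000

    (* Two fragments whose joint behaviour is "a" then "b". *)
    let _ = run [| [| Emit "a"; Goto 1 |]; [| Emit "b"; Halt |] |]
    (* = ["a"; "b"] *)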