
Selectively-Amortized Resource Bounding

Added by Tianhan Lu
Publication date: 2021
Research language: English





We consider the problem of automatically proving resource bounds. That is, we study how to prove that an integer-valued resource variable is bounded by a given program expression. Automatic resource-bound analysis has recently received significant attention because of a number of important applications (e.g., detecting performance bugs, preventing algorithmic-complexity attacks, identifying side-channel vulnerabilities), where the focus has often been on developing precise amortized reasoning techniques to infer the most exact resource usage. While such innovations remain critical, we observe that fully precise amortization is not always necessary to prove a bound of interest. In fact, by amortizing selectively, the needed supporting invariants can be simpler, making the invariant inference task more feasible and predictable. We present a framework for selectively-amortized analysis that mixes worst-case and amortized reasoning via a property decomposition and a program transformation. We show that proving bounds in any such decomposition yields a sound resource bound in the original program, and we give an algorithm for selecting a reasonable decomposition.
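To make the idea concrete, here is a minimal Python sketch (illustrative only, not taken from the paper) of a program whose resource usage decomposes into a piece that benefits from amortized reasoning and a piece for which a simple worst-case bound suffices:

```python
# Illustrative example: cost decomposes as cost = c_amortized + c_worst.
def example(n, m):
    cost = 0
    j = 0
    for _ in range(n):
        j += 1
        while j > 0:            # amortized piece: each unit of cost here is
            j -= 1              # paid for by one increment of j, so the
            cost += 1           # total across all iterations is <= n
        for _ in range(m):      # worst-case piece: exactly m per iteration,
            cost += 1           # so <= n * m in total
    # Bounding each piece separately yields cost <= n + n * m, without
    # needing a single, more complex amortized invariant for all of cost.
    assert cost <= n + n * m
    return cost

assert example(10, 3) == 10 + 10 * 3
```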




Read More

A typical way of analyzing the time complexity of functional programs is to extract a recurrence expressing the running time of the program in terms of the size of its input, and then to solve the recurrence to obtain a big-O bound. For recurrence extraction to be compositional, it is also necessary to extract recurrences for the size of outputs of helper functions. Previous work has developed techniques for using logical relations to state a formal correctness theorem for a general recurrence extraction translation: a program is bounded by a recurrence when the operational cost is bounded by the extracted cost, and the output value is bounded, according to a value bounding relation defined by induction on types, by the extracted size. This previous work supports higher-order functions by viewing recurrences as programs in a lambda-calculus, or as mathematical entities in a denotational semantics thereof. In this paper, we extend these techniques to support amortized analysis, where costs are rearranged from one portion of a program to another to achieve more precise bounds. We give an intermediate language in which programs can be annotated according to the banker's method of amortized analysis; this language has an affine type system to ensure credits are not spent more than once. We give a recurrence extraction translation of this language into a recurrence language, a simply-typed lambda-calculus with a cost type, and state and prove a bounding logical relation expressing the correctness of this translation. The recurrence language has a denotational semantics in preorders, and we use this semantics to solve recurrences, e.g., analyzing binary counters and splay trees.
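For intuition, here is a small Python sketch (with a hypothetical cost model, not the paper's actual translation) of recurrence extraction, including the output-size recurrence needed for compositionality:

```python
# Source program: list reversal by repeated append.
def rev(xs):
    return rev(xs[1:]) + [xs[0]] if xs else []

# Extracted recurrences under an assumed cost model of one unit per
# list element touched by slicing/appending:
#   cost: T(0) = 0,  T(n) = T(n-1) + n   =>  T(n) = n(n+1)/2
#   size: S(0) = 0,  S(n) = S(n-1) + 1   =>  S(n) = n
# The size recurrence S is what a caller of rev needs in order to
# extract its own cost recurrence compositionally.
def T(n):
    return 0 if n == 0 else T(n - 1) + n

assert all(T(n) == n * (n + 1) // 2 for n in range(100))
```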
Luca Bortolussi, 2015
In this paper we present CARMA, a language recently defined to support specification and analysis of collective adaptive systems. CARMA is a stochastic process algebra equipped with linguistic constructs specifically developed for modelling and programming systems that can operate in open-ended and unpredictable environments. This class of systems is typically composed of a huge number of interacting agents that dynamically adjust and combine their behaviour to achieve specific goals. A CARMA model, termed a collective, consists of a set of components, each of which exhibits a set of attributes. To model dynamic aggregations, which are sometimes referred to as ensembles, CARMA provides communication primitives that are based on predicates over the exhibited attributes. These predicates are used to select the participants in a communication. Two communication mechanisms are provided in the CARMA language: multicast-based and unicast-based. In this paper, we first introduce the basic principles of CARMA and then we show how our language can be used to support specification with a simple but illustrative example of a socio-technical collective adaptive system.
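As a rough illustration of attribute-based communication (a Python toy, not CARMA's actual syntax or semantics):

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    attrs: dict
    inbox: list = field(default_factory=list)

def multicast(collective, predicate, message):
    # Deliver to every component whose exhibited attributes satisfy
    # the predicate; the predicate selects the ensemble dynamically.
    for c in collective:
        if predicate(c.attrs):
            c.inbox.append(message)

collective = [
    Component({"role": "sensor",   "zone": 1}),
    Component({"role": "sensor",   "zone": 2}),
    Component({"role": "actuator", "zone": 1}),
]
multicast(collective, lambda a: a["zone"] == 1, "activate")
assert [len(c.inbox) for c in collective] == [1, 0, 1]
```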
We investigate program equivalence for linear higher-order (sequential) languages endowed with primitives for computational effects. More specifically, we study operationally-based notions of program equivalence for a linear λ-calculus with explicit copying and algebraic effects à la Plotkin and Power. Such a calculus makes explicit the interaction between copying and linearity, which are intensional aspects of computation, with effects, which are, instead, extensional. We review some of the notions of equivalence for linear calculi proposed in the literature and show their limitations when applied to effectful calculi where copying is a first-class citizen. We then introduce resource transition systems, namely transition systems whose states are built over tuples of programs representing the available resources, as an operational semantics accounting for both intensional and extensional interactive behaviors of programs. Our main result is a sound and complete characterization of contextual equivalence as trace equivalence defined on top of resource transition systems.
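A toy Python rendering of the general idea (states as tuples of available resources, equivalence as equality of bounded trace sets; purely illustrative, far simpler than the paper's construction):

```python
def traces(state, step, depth):
    # All label sequences of length <= depth from state, where
    # step(state) yields (label, next_state) pairs.
    if depth == 0:
        return {()}
    out = {()}
    for label, nxt in step(state):
        out |= {(label,) + t for t in traces(nxt, step, depth - 1)}
    return out

def trace_equivalent(s1, s2, step, depth=4):
    return traces(s1, step, depth) == traces(s2, step, depth)

# States are tuples of resource counts; an interaction consumes one copy.
def step(state):
    for i, r in enumerate(state):
        if r > 0:
            yield ("use", state[:i] + (r - 1,) + state[i + 1:])

# Two differently-arranged resource states with the same observable traces.
assert trace_equivalent((2, 0), (1, 1), step)
```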
Many virtual machines exist for sensor nodes with only a few KB of RAM and tens to a few hundred KB of flash memory. They pack an impressive set of features, but suffer from a slowdown of one to two orders of magnitude compared to optimised native code, reducing throughput and increasing power consumption. Compiling bytecode to native code to improve performance has been studied extensively for larger devices, but the restricted resources on sensor nodes mean most modern techniques cannot be applied. Simply replacing bytecode instructions with predefined sequences of native instructions is known to improve performance, but produces code several times larger than the optimised C equivalent, limiting the size of programmes that can fit onto a device. This paper identifies the major sources of overhead resulting from this basic approach, and presents optimisations that remove most of the remaining performance overhead and over half of the size overhead, reducing them to 69% and 91%, respectively. While this increases the size of the VM, the break-even point at which this fixed cost is compensated for is well within the range of memory available on a sensor device, allowing us to both improve performance and load more code onto a device.
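The basic approach the paper starts from can be sketched in a few lines of Python (hypothetical stack-machine opcodes and register templates, not the paper's actual instruction set):

```python
# Each bytecode instruction expands to a fixed native-code template.
TEMPLATES = {
    "PUSH": ["ldi r0, {arg}", "push r0"],
    "ADD":  ["pop r0", "pop r1", "add r0, r1", "push r0"],
}

def translate(bytecode):
    native = []
    for op, arg in bytecode:
        native += [line.format(arg=arg) for line in TEMPLATES[op]]
    return native

prog = [("PUSH", 1), ("PUSH", 2), ("ADD", None)]
# 3 bytecode instructions become 8 native ones: the kind of code-size
# blowup the paper's optimisations target, e.g. by avoiding redundant
# push/pop traffic around every instruction.
assert len(translate(prog)) == 8
```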
The variational autoencoder (VAE) is a popular model for density estimation and representation learning. Canonically, the variational principle suggests preferring an expressive inference model so that the variational approximation is accurate. However, it is often overlooked that an overly expressive inference model can be detrimental to the test-set performance of both the amortized posterior approximator and, more importantly, the generative density estimator. In this paper, we leverage the fact that VAEs rely on amortized inference and propose techniques for amortized inference regularization (AIR) that control the smoothness of the inference model. We demonstrate that, by applying AIR, it is possible to improve VAE generalization in terms of both inference and generative performance. Our paper challenges the belief that amortized inference is simply a mechanism for approximating maximum likelihood training and illustrates that regularization of the amortization family provides a new direction for understanding and improving generalization in VAEs.
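As one concrete (assumed) instance of inference-model regularization, consider a denoising-style ELBO where the encoder sees a perturbed input; a minimal NumPy sketch, not necessarily the paper's exact AIR variants:

```python
import numpy as np

def elbo_with_air(x, encode, decode, noise_std=0.1, rng=None):
    # One-sample ELBO; noising the encoder input smooths the amortized
    # posterior approximator (a denoising-style regularizer).
    if rng is None:
        rng = np.random.default_rng(0)
    mu, logvar = encode(x + noise_std * rng.standard_normal(x.shape))
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
    recon = -np.sum((x - decode(z)) ** 2)   # Gaussian log-lik, up to a constant
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    return recon - kl

# Toy linear encoder/decoder, for shape-checking only.
enc = lambda x: (0.5 * x, np.zeros_like(x))
dec = lambda z: 2.0 * z
print(elbo_with_air(np.ones(4), enc, dec))
```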
