
Accurate Programming: Thinking about programs in terms of properties

Added by: EPTCS
Publication date: 2011
Language: English
Author: Walid Taha





Accurate programming is a practical approach to producing high-quality programs. It combines ideas from test automation, test-driven development, agile programming, and other state-of-the-art software development methods. In addition to building on approaches that have proven effective in practice, it emphasizes concepts that help programmers sharpen their understanding of both the problems they are solving and the solutions they come up with. This is achieved by encouraging programmers to think about programs in terms of properties.
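
As a concrete illustration (not taken from the paper), thinking in properties can be made executable: rather than checking one hand-picked example, state a property the program must satisfy and check it over many random inputs. A minimal, stdlib-only Python sketch, with `my_sort` as a hypothetical program under development:

```python
import random
from collections import Counter

def my_sort(xs):
    # Stand-in for the program under development.
    return sorted(xs)

def check_property(prop, trials=1000):
    """Check a property on many random integer lists (a poor man's QuickCheck)."""
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        assert prop(xs), f"property failed on {xs}"

# Thinking in properties sharpens the spec: the output must be ordered,
# and it must be a permutation of the input.
check_property(lambda xs: all(a <= b for a, b in zip(my_sort(xs), my_sort(xs)[1:])))
check_property(lambda xs: Counter(my_sort(xs)) == Counter(xs))
```

Note that the two properties together pin down sorting exactly; either one alone admits wrong programs (e.g., returning an empty list satisfies the ordering property).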




Related research

Class invariants -- consistency constraints preserved by every operation on objects of a given type -- are fundamental to building and understanding object-oriented programs. They should also be a key help in verifying them, but turn out instead to raise major verification challenges, which have prompted a significant literature with, until now, no widely accepted solution. The present work introduces a general proof rule meant to address invariant-related issues and allow verification tools to benefit from invariants. It first clarifies the notion of invariant and identifies three problems: callbacks, furtive access, and reference leaks. As an example, the 2016 Ethereum DAO bug, in which $50 million was stolen, resulted from a callback invalidating an invariant. The discussion starts with a Simple Model and an associated proof rule, demonstrating its soundness. It then removes the three assumptions of the Simple Model one by one, each removal bringing up one of the three issues, and introduces the corresponding adaptation to the proof rule. The final version of the rule can tackle tricky examples, including challenge problems listed in the literature.
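The following is a minimal Python sketch (not the paper's proof rule) of the callback problem: a method breaks its class invariant mid-update, and a callback observes the object in that inconsistent state, just as the DAO attack observed a contract mid-transfer:

```python
class Account:
    """Invariant: self.balance == sum(self.ledger)."""

    def __init__(self):
        self.balance = 0
        self.ledger = []

    def invariant(self):
        return self.balance == sum(self.ledger)

    def deposit(self, amount, on_deposit=None):
        self.balance += amount      # invariant temporarily broken here
        if on_deposit is not None:
            on_deposit(self)        # callback sees the inconsistent state
        self.ledger.append(amount)  # invariant restored

acct = Account()
acct.deposit(10, on_deposit=lambda a: print("invariant holds?", a.invariant()))
# prints: invariant holds? False -- the callback ran between the two updates
```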
Programming by Example (PBE) is a program synthesis paradigm in which the synthesizer creates a program that matches a set of given examples. In many applications of such synthesis (e.g., program repair or reverse engineering), the goal is to reconstruct a program that is close to a specific target program, not merely to produce some program that satisfies the seen examples. In such settings, we want the synthesized program to generalize well, i.e., to have as few errors as possible on the unobserved examples capturing the target function's behavior. In this paper, we propose the first framework (called SynGuar) for PBE synthesizers that guarantees low generalization error with high probability. Our main contribution is a procedure to dynamically calculate how many additional examples suffice to theoretically guarantee generalization. We show how our techniques can be used in two well-known synthesis approaches, PROSE and STUN (synthesis through unification), on common string-manipulation program benchmarks. We find that often a few hundred examples suffice to provably bound generalization error below 5% with high (≥ 98%) probability on these benchmarks. Further, we confirm this empirically: SynGuar significantly improves the accuracy of existing synthesizers in generating the right target programs, whereas with fewer, arbitrarily chosen examples, the same baseline synthesizers (without SynGuar) overfit and lose accuracy.
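SynGuar's actual procedure is dynamic and tied to the synthesizer's search space; as a rough illustration of why a few hundred examples can suffice, here is the classical Occam/PAC bound m ≥ (1/ε)(ln|H| + ln(1/δ)) for a finite hypothesis class, with the program-space size 10^12 chosen purely for illustration:

```python
import math

def examples_needed(hypothesis_count, epsilon=0.05, delta=0.02):
    """Classical Occam-razor bound: with m i.i.d. examples, any hypothesis
    from a finite class of size |H| that is consistent with all of them has
    generalization error <= epsilon with probability >= 1 - delta, provided
    m >= (1/epsilon) * (ln|H| + ln(1/delta))."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# A hypothetical space of 10^12 candidate string programs:
print(examples_needed(10**12))  # 631 examples for 5% error at 98% confidence
```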
A software analysis is a computer program that takes some representation of a software product as input and produces some useful information about that product as output. A software product line encompasses many software product variants, so existing analyses can be applied to each product variant individually, but not to the entire product line as a whole. Enumerating all product variants and analyzing them one by one is usually intractable, because the number of product variants grows combinatorially with the number of product line features. Several software analyses (e.g., type checkers, model checkers, data flow analyses) have been redesigned or re-implemented to support variability. This usually requires a lot of time and effort, and the variability-aware version of the analysis might contain new errors or bugs that do not exist in the original one. In this paper, given an analysis program written in a functional language based on PCF, we present two approaches to transforming (lifting) it into a semantically equivalent variability-aware analysis. A lightweight approach (referred to as shallow lifting) wraps the analysis program into a variability-aware version, exploring all combinations of its input arguments. Deep lifting, on the other hand, is a program rewriting mechanism in which the syntactic constructs of the input program are rewritten into their variability-aware counterparts. Compositionally, this results in an efficient program semantically equivalent to the input program, modulo variability. We present the correctness criteria for functional program lifting, together with correctness proof sketches of our program transformations. We evaluate our approach on a set of program analyses applied to the BusyBox C-language product line.
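A hypothetical Python sketch of the shallow-lifting idea (the paper works over a PCF-based functional language, not Python): the plain analysis is wrapped, unchanged, in a loop over all combinations of its variational arguments:

```python
from itertools import product

def shallow_lift(analysis):
    """Hypothetical 'shallow lifting': wrap a plain analysis so it runs over
    every combination of variant values and returns one result per combination.
    Each argument is a dict mapping a configuration label to a concrete value."""
    def lifted(*variational_args):
        results = {}
        for combo in product(*(arg.items() for arg in variational_args)):
            labels = tuple(label for label, _ in combo)
            values = [value for _, value in combo]
            results[labels] = analysis(*values)
        return results
    return lifted

# Toy analysis: count lines of "code" per feature configuration.
count_lines = shallow_lift(lambda src: len(src.splitlines()))
print(count_lines({"FEATURE_ON": "a\nb\nc", "FEATURE_OFF": "a"}))
# {('FEATURE_ON',): 3, ('FEATURE_OFF',): 1}
```

This is exactly the brute-force enumeration the abstract calls intractable in general; deep lifting avoids it by rewriting the analysis itself to share work across configurations.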
Jason Koenig (2016)
Algorithmic and data refinement are well studied topics that provide a mathematically rigorous approach to gradually introducing details in the implementation of software. Program refinements are performed in the context of some programming language, but mainstream languages lack features for recording the sequence of refinement steps in the program text. To experiment with the combination of refinement, automated verification, and language design, refinement features have been added to the verification-aware programming language Dafny. This paper describes those features and reflects on some initial usage thereof.
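Dafny's refinement syntax is not reproduced here; as a language-neutral sketch of data refinement, the Python fragment below pairs an abstract specification with a concrete implementation and uses an abstraction function to check, step by step, that the two stay in correspondence (tested here rather than proved, as Dafny would):

```python
class SpecCounter:
    """Abstract specification: the counter is just a mathematical integer."""
    def __init__(self):
        self.value = 0
    def inc(self):
        self.value += 1

class ImplCounter:
    """Concrete refinement: the counter is a log of events; the abstraction
    function maps the log back to the abstract integer."""
    def __init__(self):
        self.events = []
    def inc(self):
        self.events.append("inc")
    def abstraction(self):
        return len(self.events)

# Refinement check as a simulation argument: every concrete step must
# preserve the relation abstraction(impl) == spec.value.
spec, impl = SpecCounter(), ImplCounter()
for _ in range(5):
    spec.inc()
    impl.inc()
    assert impl.abstraction() == spec.value
```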
We present an efficient and expressive tool for the instrumentation of Java programs at the bytecode level. BISM (Bytecode-Level Instrumentation for Software Monitoring) is a lightweight Java bytecode instrumentation tool that features an expressive, high-level, control-flow-aware instrumentation language. The language is inspired by the aspect-oriented programming paradigm, modularizing instrumentation into separate transformers that encapsulate joinpoint selection and advice inlining. BISM allows capturing joinpoints ranging from bytecode instructions to method executions, and provides comprehensive static and dynamic context information. It runs in two instrumentation modes: build-time and load-time. BISM also provides a mechanism to compose transformers and automatically detect their collisions in the base program. Transformers in a composition can control the visibility of their advice and other instructions from the base program. We show several example applications for BISM and demonstrate its effectiveness in three experiments: a security scenario, a financial transaction system, and a general runtime verification case. The results show that BISM instrumentation incurs low runtime and memory overheads.
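BISM's instrumentation language is not shown here; as a loose Python analogy to the aspect-oriented idea it builds on, the decorator below selects a method-execution joinpoint and inlines before/after advice around it, passing along some dynamic context:

```python
import functools

def advice(before=None, after=None):
    """Aspect-style sketch (not BISM's API): select a method-execution
    joinpoint and inline 'before'/'after' advice around it."""
    def transformer(fn):
        @functools.wraps(fn)
        def woven(*args, **kwargs):
            if before:
                before(fn.__name__, args)      # advice sees name + arguments
            result = fn(*args, **kwargs)
            if after:
                after(fn.__name__, result)     # advice sees the return value
            return result
        return woven
    return transformer

@advice(before=lambda name, args: print(f"enter {name}{args}"),
        after=lambda name, result: print(f"exit {name} -> {result}"))
def transfer(amount):
    return amount * 0.99  # toy financial operation

transfer(100)
```

BISM does this weaving at the bytecode level rather than at the source level, and its transformers can additionally target individual bytecode instructions and basic blocks.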
