
Verified Secure Compilation for Mixed-Sensitivity Concurrent Programs

Added by Robert Sison
Publication date: 2020
Research language: English
Authors: Robert Sison





Proving only over source code that programs do not leak sensitive data leaves a gap between reasoning and reality that can only be filled by accounting for the behaviour of the compiler. Furthermore, software does not always have the luxury of limiting itself to single-threaded computation with resources statically dedicated to each user to ensure the confidentiality of their data. This results in mixed-sensitivity concurrent programs, which might reuse memory shared between their threads to hold data of different sensitivity levels at different times; for such programs, a compiler must preserve the value-dependent coordination of such mixed-sensitivity reuse despite the impact of concurrency. Here we demonstrate, using Isabelle/HOL, that it is feasible to verify that a compiler preserves noninterference, the strictest kind of confidentiality property, for mixed-sensitivity concurrent programs. First, we present notions of refinement that preserve a concurrent value-dependent notion of noninterference that we have designed to support such programs. As proving noninterference-preserving refinement can be considerably more complex than the standard refinements typically used to verify semantics-preserving compilation, our notions include a decomposition principle that separates the semantics-preservation from the security-preservation concerns. Second, we demonstrate that these refinement notions are applicable to verified secure compilation, by exercising them on a single-pass compiler for mixed-sensitivity concurrent programs that synchronise using mutex locks, from a generic imperative language to a generic RISC-style assembly language. Finally, we execute our compiler on a nontrivial mixed-sensitivity concurrent program modelling a real-world use case, thus preserving its source-level noninterference properties down to an assembly-level model automatically. (See paper for complete abstract.)
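To make the setting concrete, here is a minimal illustration of mixed-sensitivity reuse, written by us in OCaml rather than in the paper's generic imperative source language; the names (buf, buf_is_secret, put, low_output) are hypothetical. The classification of the shared buffer is value-dependent on a control variable, and a mutex coordinates access to both; a compiler that accessed the buffer outside the lock, or copied its contents to some other shared location, could undermine exactly this kind of source-level noninterference reasoning.

    (* Sketch only: shared memory reused for data of different sensitivity levels,
       with a control variable recording the buffer's current classification. *)
    let lock = Mutex.create ()
    let buf_is_secret = ref false   (* control variable: current sensitivity of buf *)
    let buf = ref 0                 (* shared memory reused for secret and public data *)

    (* Producer: records the classification before reusing the buffer. *)
    let put v secret =
      Mutex.lock lock;
      buf_is_secret := secret;
      buf := v;
      Mutex.unlock lock

    (* Low observer: releases the buffer's contents to a public sink only when
       the control variable says they are currently public. *)
    let low_output () =
      Mutex.lock lock;
      (if not !buf_is_secret then (print_int !buf; print_newline ()));
      Mutex.unlock lock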





Compiler correctness is, in its simplest form, defined as the inclusion of the set of traces of the compiled program into the set of traces of the original program, which is equivalent to the preservation of all trace properties. Here traces collect, for instance, the externally observable events of each execution. This definition requires, however, the set of traces of the source and target languages to be exactly the same, which is not the case when the languages are far apart or when observations are fine-grained. To overcome this issue, we study a generalized compiler correctness definition, which uses source and target traces drawn from potentially different sets and connected by an arbitrary relation. We set out to understand what guarantees this generalized compiler correctness definition gives us when instantiated with a non-trivial relation on traces. When this trace relation is not equality, it is no longer possible to preserve the trace properties of the source program unchanged. Instead, we provide a generic characterization of the target trace property ensured by correctly compiling a program that satisfies a given source property, and dually, of the source trace property one is required to show in order to obtain a certain target property for the compiled code. We show that this view on compiler correctness can naturally account for undefined behavior, resource exhaustion, different source and target values, side-channels, and various abstraction mismatches. Finally, we show that the same generalization also applies to many secure compilation definitions, which characterize the protection of a compiled program against linked adversarial code.
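For reference, the definitions discussed above have a standard shape, sketched here in our own notation rather than the paper's (Beh(W) is the trace set of program W, W↓ its compilation, ~ the trace relation):

    \mathrm{CC}:\quad \forall W.\ \mathrm{Beh}(W\!\downarrow) \subseteq \mathrm{Beh}(W)
    \quad\text{equivalently}\quad \forall \pi.\ \mathrm{Beh}(W) \subseteq \pi \;\Rightarrow\; \mathrm{Beh}(W\!\downarrow) \subseteq \pi

    \mathrm{CC}^{\sim}:\quad \forall W.\ \forall t \in \mathrm{Beh}(W\!\downarrow).\ \exists s \in \mathrm{Beh}(W).\ s \sim t

    \tilde{\tau}(\pi_S) = \{\, t \mid \exists s \in \pi_S.\ s \sim t \,\}        (target property ensured by a source property \pi_S)
    \tilde{\sigma}(\pi_T) = \{\, s \mid \forall t.\ s \sim t \Rightarrow t \in \pi_T \,\}   (source property to establish for a target property \pi_T)

When ~ is equality, CC^~ collapses back to trace inclusion and both property mappings are the identity.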
Yuting Wang (2017)
We argue that the implementation and verification of compilers for functional programming languages are greatly simplified by employing a higher-order representation of syntax known as Higher-Order Abstract Syntax or HOAS. The underlying idea of HOAS is to use a meta-language that provides a built-in and logical treatment of binding related notions. By embedding the meta-language within a larger programming or reasoning framework, it is possible to absorb the treatment of binding structure in the object language into the meta-theory of the system, thereby greatly simplifying the overall implementation and reasoning processes. We develop the above argument in this thesis by presenting and demonstrating the effectiveness of an approach to the verified implementation of compiler transformations for functional programs that exploits HOAS. In this approach, transformations on functional programs are first articulated in the form of rule-based relational specifications. These specifications are rendered into programs in the language lambda Prolog. On the one hand, these programs serve directly as implementations. On the other hand, they can be used as input to the Abella system which allows us to prove properties about them and thereby about the implementations. Both lambda Prolog and Abella support the use of the HOAS approach. Thus, they constitute a framework that can be used to test out the benefits of the HOAS approach in verified compilation. We use them to implement and verify a compiler for a representative functional programming language that embodies the transformations that form the core of many compilers for such languages. In both the programming and the reasoning phases, we show how the use of the HOAS approach significantly simplifies the representation, manipulation, analysis and reasoning of binding structure.
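The representational trick at the heart of HOAS can be shown in a few lines; the sketch below is ours, in OCaml rather than lambda Prolog or Abella, and it illustrates only the encoding of binders as meta-language functions, not the logical treatment of binding that those systems add on top.

    (* Object-language lambda terms with binding absorbed into the meta-language:
       the body of a Lam is an ordinary OCaml function, so there are no variable
       names to manage and substitution is just function application. *)
    type tm =
      | App of tm * tm
      | Lam of (tm -> tm)

    (* Call-by-value evaluation: beta reduction needs no substitution machinery. *)
    let rec eval (t : tm) : tm =
      match t with
      | Lam _ -> t
      | App (f, a) ->
          (match eval f with
           | Lam body -> eval (body (eval a))
           | f' -> App (f', eval a))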
We present VOQC, the first fully verified optimizer for quantum circuits, written using the Coq proof assistant. Quantum circuits are expressed as programs in a simple, low-level language called SQIR, a small quantum intermediate representation, which is deeply embedded in Coq. Optimizations and other transformations are expressed as Coq functions, which are proved correct with respect to a semantics of SQIR programs. SQIR uses a semantics of matrices of complex numbers, which is the standard for quantum computation, but treats matrices symbolically in order to reason about programs that use an arbitrary number of quantum bits. SQIR's careful design and our provided automation make it possible to write and verify a broad range of optimizations in VOQC, including full-circuit transformations from cutting-edge optimizers.
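As a rough picture of the style (a toy sketch of ours in OCaml, not SQIR or VOQC code, and omitting the Coq correctness proofs that are the point of the paper): a circuit is a deeply embedded list of gates, and an optimization is an ordinary function over that list, such as a peephole pass that cancels adjacent self-inverse gates. In VOQC, each such pass additionally carries a proof that the circuit's matrix semantics is unchanged.

    (* Toy deep embedding of circuits as lists of gates over qubit indices. *)
    type gate = H of int | X of int | CNOT of int * int

    (* One peephole pass: adjacent H;H or X;X on the same qubit is the identity. *)
    let rec cancel = function
      | H p :: H q :: rest when p = q -> cancel rest
      | X p :: X q :: rest when p = q -> cancel rest
      | g :: rest -> g :: cancel rest
      | [] -> []

    (* Iterate to a fixed point so that newly adjacent pairs also cancel. *)
    let rec optimize circuit =
      let circuit' = cancel circuit in
      if circuit' = circuit then circuit else optimize circuit'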
Allan Blanchard (2017)
Frama-C is a software analysis framework that provides a common infrastructure and a common behavioral specification language to plugins that implement various static and dynamic analyses of C programs. Most plugins do not support concurrency. We have proposed Conc2Seq, a Frama-C plugin based on program transformation, capable of leveraging the existing huge code base of plugins and of handling concurrent C programs. In this paper we formalize and sketch the proof of correctness of the program transformation principle behind Conc2Seq, and present an effort towards the full mechanization of both the formalization and proofs with the proof assistant Coq.
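To give a flavour of sequentializing transformations in general, here is a rough OCaml sketch of ours; it is a generic interleaving simulation, not Conc2Seq's actual encoding, which operates on C programs within Frama-C. Each thread is reduced to a sequence of atomic steps on shared state, and a single sequential driver replays an explicit interleaving of those steps, so that analyses built for sequential programs become applicable.

    (* Generic sketch: threads as arrays of atomic actions on shared state,
       sequentialized by a driver that follows an explicit interleaving. *)
    let x = ref 0

    let thread_a = [| (fun () -> x := !x + 1); (fun () -> x := !x * 2) |]
    let thread_b = [| (fun () -> x := !x - 1) |]

    (* [schedule] says which thread takes the next atomic step; the per-thread
       program counters pc.(i) track progress, as a scheduler would. *)
    let run schedule =
      let threads = [| thread_a; thread_b |] in
      let pc = Array.map (fun _ -> ref 0) threads in
      List.iter
        (fun i ->
           let t = threads.(i) and k = pc.(i) in
           if !k < Array.length t then (t.(!k) (); incr k))
        schedule

    let () = run [0; 1; 0]; Printf.printf "x = %d\n" !x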
We introduce a transformation system for concurrent constraint programming (CCP). We define suitable applicability conditions for the transformations which guarantee that the input/output CCP semantics is preserved also when distinguishing deadlocked computations from successful ones and when considering intermediate results of (possibly) non-terminating computations. The system allows us to optimize CCP programs while preserving their intended meaning: in addition to the usual benefits that one has for sequential declarative languages, the transformation of concurrent programs can also lead to the elimination of communication channels and of synchronization points, to the transformation of non-deterministic computations into deterministic ones, and to the crucial saving of computational space. Furthermore, since the transformation system preserves the deadlock behavior of programs, it can be used for proving deadlock freeness of a given program with respect to a class of queries. To this aim it is sometimes sufficient to apply our transformations and to specialize the resulting program with respect to the given queries in such a way that the obtained program is trivially deadlock free.