
Verifying that a compiler preserves concurrent value-dependent information-flow security

Posted by: Robert Sison
Publication date: 2019
Research field: Informatics engineering
Language: English
Author: Robert Sison





It is common to prove by reasoning over source code that programs do not leak sensitive data. But doing so leaves a gap between reasoning and reality that can only be filled by accounting for the behaviour of the compiler. This task is complicated when programs enforce value-dependent information-flow security properties (in which classification of locations can vary depending on values in other locations) and complicated further when programs exploit shared-variable concurrency. Prior work has formally defined a notion of concurrency-aware refinement for preserving value-dependent security properties. However, that notion is considerably more complex than standard refinement definitions typically applied in the verification of semantics preservation by compilers. To date it remains unclear whether it can be applied to a realistic compiler, because there exist no general decomposition principles for separating it into smaller, more familiar, proof obligations. In this work, we provide such a decomposition principle, which we show can almost halve the complexity of proving secure refinement. Further, we demonstrate its applicability to secure compilation, by proving in Isabelle/HOL the preservation of value-dependent security by a proof-of-concept compiler from an imperative While language to a generic RISC-style assembly language, for programs with shared-memory concurrency mediated by locking primitives. Finally, we execute our compiler in Isabelle on a While language model of the Cross Domain Desktop Compositor, demonstrating to our knowledge the first use of a compiler verification result to carry an information-flow security property down to the assembly-level model of a non-trivial concurrent program.
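For intuition about value-dependent classification, here is a minimal hypothetical sketch in Haskell; it is not the paper's While language, its compiler, or its Isabelle/HOL development, and it ignores concurrency. Whether the location `buf` is secret depends on the value stored in a control location `ctrl`, and the check compares the low-observable view of two runs that differ only in the secret.

```haskell
-- Toy illustration of value-dependent classification (hypothetical; not the
-- paper's While language or its Isabelle/HOL development).
import Data.Map (Map, fromList, insert, (!))

type Var   = String
type Store = Map Var Int

-- The classification of `buf` depends on the value stored in `ctrl`:
-- ctrl == 1 means `buf` currently holds a secret.
bufIsSecret :: Store -> Bool
bufIsSecret s = s ! "ctrl" == 1

-- A small program: it may copy `buf` into the public sink `out`,
-- but only when `ctrl` says `buf` is public.
step :: Store -> Store
step s
  | bufIsSecret s = insert "out" 0 s            -- secret: publish a constant
  | otherwise     = insert "out" (s ! "buf") s  -- public: copying is allowed

-- Low-equivalence: public observers see only `ctrl` and `out`.
lowView :: Store -> (Int, Int)
lowView s = (s ! "ctrl", s ! "out")

-- Noninterference check for one pair of runs that differ only in `buf`
-- while `ctrl` marks `buf` as secret.
main :: IO ()
main = do
  let run secret = step (fromList [("ctrl", 1), ("buf", secret), ("out", 0)])
  print (lowView (run 7) == lowView (run 42))  -- True: nothing leaks via `out`
```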


Read also

Methods for proving that concurrent software does not leak its secrets have remained an active topic of research for at least the past four decades. Despite an impressive array of work, the present situation remains highly unsatisfactory. With contemporary compositional proof methods one is forced to choose between expressiveness (the ability to reason about a wide variety of security policies), on the one hand, and precision (the ability to reason about complex thread interactions and program behaviours), on the other. Achieving both is essential and, we argue, requires a new style of compositional reasoning. We present VERONICA, the first program logic for proving concurrent programs information-flow secure that supports compositional, high-precision reasoning about a wide range of security policies and program behaviours (e.g. expressive declassification, value-dependent classification, secret-dependent branching). Just as importantly, VERONICA embodies a new approach for engineering such logics that can be re-used elsewhere, called decoupled functional correctness (DFC). DFC leads to a simple and clean logic, even while achieving this unprecedented combination of features. We demonstrate the virtues and versatility of VERONICA by verifying a range of example programs, beyond the reach of prior methods.
This paper presents LWeb, a framework for enforcing label-based information-flow policies in database-using web applications. In a nutshell, LWeb marries the LIO Haskell IFC enforcement library with the Yesod web programming framework. The implementation has two parts. First, we extract the core of LIO into a monad transformer (LMonad) and then apply it to Yesod's core monad. Second, we extend Yesod's table-definition DSL and query functionality to permit defining label-based policies on tables and enforcing them during query processing. LWeb's policy language is expressive, permitting dynamic per-table and per-row policies. We formalize the essence of LWeb in the $\lambda_{LWeb}$ calculus and mechanize the proof of noninterference in Liquid Haskell. This mechanization constitutes the first metatheoretic proof carried out in Liquid Haskell. We also used LWeb to build a substantial web site hosting the Build it, Break it, Fix it security-oriented programming contest. The site involves 40 data tables and sophisticated policies. Compared to manually checking security policies, LWeb imposes a modest runtime overhead of between 2% and 21%. It reduces the trusted code base from the whole application to just 1% of the application code, and 21% of the code overall (when counting LWeb too).
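The label-tracking discipline can be pictured with a self-contained toy sketch; this is deliberately not the LIO or LWeb API, just the same idea in miniature. Values carry labels, reading a labelled value raises the computation's current label, and a write to a public sink is blocked once anything secret has been read.

```haskell
-- A toy labelled-value monad, sketched for intuition only; it is not the
-- LIO/LWeb API, merely the same flavour of label tracking in miniature.
data Label = Public | Secret deriving (Eq, Ord, Show)

data Labeled a = Labeled Label a

-- A computation tracks the join of the labels of everything it has read.
newtype LIOish a = LIOish { runLIOish :: Label -> (Label, a) }

instance Functor LIOish where
  fmap f (LIOish g) = LIOish (\l -> let (l', x) = g l in (l', f x))

instance Applicative LIOish where
  pure x = LIOish (\l -> (l, x))
  LIOish f <*> LIOish g =
    LIOish (\l -> let (l1, h) = f l; (l2, x) = g l1 in (l2, h x))

instance Monad LIOish where
  LIOish g >>= k = LIOish (\l -> let (l', x) = g l in runLIOish (k x) l')

-- Reading raises the current label to the join of the two labels.
unlabel :: Labeled a -> LIOish a
unlabel (Labeled l x) = LIOish (\cur -> (max cur l, x))

-- Writing to a public sink is permitted only if nothing secret was read.
toPublicSink :: Show a => a -> LIOish (Either String ())
toPublicSink x = LIOish (\cur ->
  (cur, if cur <= Public then Right () else Left ("blocked: " ++ show x)))

main :: IO ()
main = do
  let secretRow    = Labeled Secret (42 :: Int)
      (_, result)  = runLIOish (unlabel secretRow >>= toPublicSink) Public
  print result  -- Left "blocked: 42": the secret read taints the computation
```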
216 - Shichao Liu, Ying Jiang 2015
In this paper, we extend the theory of CCS for trees (CCTS) to value-passing CCTS (VCCTS), whose symbols have the capacity to receive and send data values, and we propose a non-sequential semantics in an operational style. In this concurrent model, a weak barbed congruence and a localized early weak bisimilarity are defined, and the latter relation is proved sufficient to justify the former. As an illustration of potential applications of VCCTS, a VCCTS-based semantics is given to a toy multi-threaded programming language featuring a core of C/C++ concurrency; a formalization based on the operational semantics of VCCTS is also proposed for some relaxed memory models, and a DRF-guarantee property with respect to VCCTS is proved.
196 - M. Boreale 2015
We put forward a model of action-based randomization mechanisms to analyse quantitative information flow (QIF) under generic leakage functions, and under possibly adaptive adversaries. This model subsumes many of the QIF models proposed so far. Our main contributions include the following: (1) we identify mild general conditions on the leakage function under which it is possible to derive general and significant results on adaptive QIF; (2) we contrast the efficiency of adaptive and non-adaptive strategies, showing that the latter are as efficient as the former in terms of length, up to an expansion factor bounded by the number of available actions; (3) we show that the maximum information leakage over strategies, given a finite time horizon, can be expressed in terms of a Bellman equation. This can be used to compute an optimal finite strategy recursively, by resorting to standard methods like backward induction.
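As a concrete picture of point (3), the sketch below solves a Bellman-style recursion by backward induction over a finite horizon. The states, actions, and per-step leakage numbers are invented toys, not the paper's QIF model; only the recursion scheme is the point.

```haskell
-- Backward induction for a Bellman-style recursion, with toy data; the
-- states, actions, and leakage values are invented purely for illustration.
import Data.List (maximumBy)
import Data.Ord (comparing)

type State  = Int
type Action = Int

-- Toy one-step leakage obtained by taking action a in state s.
leak :: State -> Action -> Double
leak s a = fromIntegral ((s + a) `mod` 3)

-- Toy deterministic transition.
next :: State -> Action -> State
next s a = (s * 2 + a) `mod` 5

actions :: [Action]
actions = [0, 1, 2]

-- bestPlan h s = maximum leakage achievable from state s with h steps left,
-- paired with the optimal finite strategy (the sequence of actions).
bestPlan :: Int -> State -> (Double, [Action])
bestPlan 0 _ = (0, [])
bestPlan h s =
  maximumBy (comparing fst)
    [ (leak s a + v, a : plan)
    | a <- actions
    , let (v, plan) = bestPlan (h - 1) (next s a) ]

main :: IO ()
main = print (bestPlan 4 0)  -- optimal value and action sequence for horizon 4
```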
Fully Homomorphic Encryption (FHE) refers to a set of encryption schemes that allow computations to be applied directly on encrypted data without requiring a secret key. This enables novel application scenarios where a client can safely offload storage and computation to a third-party cloud provider without having to trust the software and the hardware vendors with the decryption keys. Recent advances in both FHE schemes and implementations have moved such applications from theoretical possibilities into the realm of practicality. This paper proposes a compact and well-reasoned interface called the Homomorphic Instruction Set Architecture (HISA) for developing FHE applications. Just as the hardware ISA interface enabled hardware advances to proceed independently of software advances in compilers and language runtimes, HISA decouples compiler optimizations and runtimes for supporting FHE applications from advancements in the underlying FHE schemes. This paper demonstrates the capabilities of HISA by building an end-to-end software stack for evaluating neural network models on encrypted data. Our stack includes an end-to-end compiler, runtime, and a set of optimizations. Our approach shows that, on a set of popular neural network architectures, the generated code is faster than hand-optimized implementations.
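The decoupling idea behind such an interface can be pictured with a toy sketch; this is not the paper's actual HISA, and the "ciphertext" below performs no real encryption. Application code is written only against a small homomorphic instruction surface, so the underlying scheme can change without touching the kernel.

```haskell
-- A mock "homomorphic instruction set": a few operations over an opaque
-- ciphertext type. Cipher is a trivial stand-in (no real encryption); the
-- point is only the interface shape, not the paper's actual HISA.
newtype Cipher = Cipher Integer deriving Show

-- The instruction-set surface a compiler/runtime would target.
class HomISA c where
  encrypt :: Integer -> c
  decrypt :: c -> Integer
  hAdd    :: c -> c -> c
  hMul    :: c -> c -> c

instance HomISA Cipher where
  encrypt                    = Cipher
  decrypt (Cipher x)         = x
  hAdd (Cipher x) (Cipher y) = Cipher (x + y)
  hMul (Cipher x) (Cipher y) = Cipher (x * y)

-- A tiny "compiled kernel" written only against the interface:
-- a dot product evaluated over ciphertexts.
dotProduct :: HomISA c => [c] -> [c] -> c
dotProduct xs ys = foldr1 hAdd (zipWith hMul xs ys)

main :: IO ()
main = do
  let xs = map encrypt [1, 2, 3] :: [Cipher]
      ys = map encrypt [4, 5, 6] :: [Cipher]
  print (decrypt (dotProduct xs ys))  -- 32, computed without plaintext access
```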