
Formal Specification and Verification of Fully Asynchronous Implementations of the Data Encryption Standard

Published by: EPTCS
Publication date: 2015
Research field: Informatics Engineering
Language: English
Author: Wendelin Serwe





This paper presents two formal models of the Data Encryption Standard (DES), a first using the international standard LOTOS, and a second using the more recent process calculus LNT. Both models encode the DES in the style of asynchronous circuits, i.e., the data-flow blocks of the DES algorithm are represented by processes communicating via rendezvous. To ensure correctness of the models, several techniques have been applied, including model checking, equivalence checking, and comparing the results produced by a prototype automatically generated from the formal model with those of existing implementations of the DES. The complete code of the models is provided as appendices and also available on the website of the CADP verification toolbox.
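To give a flavour of the modelling style described above, here is a minimal Python sketch, not the paper's LOTOS or LNT code, of two data-flow blocks running as concurrent processes that hand values to each other over a synchronous, unbuffered channel, approximating rendezvous communication. The Rendezvous class and the placeholder block functions are assumptions made for illustration only.

```python
import threading

class Rendezvous:
    """Unbuffered channel: send() blocks until recv() has taken the value,
    approximating rendezvous communication between two processes."""
    def __init__(self):
        self._slot = None
        self._ready = threading.Semaphore(0)   # a value has been offered
        self._taken = threading.Semaphore(0)   # the value has been consumed

    def send(self, value):
        self._slot = value
        self._ready.release()
        self._taken.acquire()                  # wait until the receiver has taken it

    def recv(self):
        self._ready.acquire()
        value = self._slot
        self._taken.release()
        return value

def permutation_block(inp, out):
    # Placeholder: a real model would apply DES's initial permutation here.
    while True:
        out.send(inp.recv())

def round_block(inp, out, subkey):
    # Placeholder: a real model would apply the Feistel round function here.
    while True:
        out.send(inp.recv() ^ subkey)

if __name__ == "__main__":
    a, b, c = Rendezvous(), Rendezvous(), Rendezvous()
    threading.Thread(target=permutation_block, args=(a, b), daemon=True).start()
    threading.Thread(target=round_block, args=(b, c, 0b101010), daemon=True).start()
    a.send(0b110011)
    print(bin(c.recv()))
```

In the paper's models the blocks are LNT or LOTOS processes and the channels are rendezvous gates; the sketch only mirrors that wiring with threads.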




Read also

Usage control models provide an integration of access control, digital rights, and trust management. To achieve this integration, usage control models support additional concepts such as attribute mutability and continuity of decision. However, these concepts may introduce an additional level of complexity to the underlying model, rendering its definition a cumbersome and error-prone process. Applying a formal verification technique allows for a rigorous analysis of the interactions amongst the components, and thus for formal guarantees with respect to the correctness of a model. In this paper, we elaborate on a case study in which we express the high-level functional model of the UseCON usage control model in the TLA+ formal specification language, and verify its correctness for <=12 uses in both of its supporting authorisation models.
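As a rough illustration of the kind of bounded check this abstract mentions, and not TLA+ or the actual UseCON specification, the Python sketch below enumerates the reachable states of a toy usage-control process with a mutable "uses" attribute and checks a safety invariant up to a bound of 12 uses; all state names and transitions are invented.

```python
from collections import deque

MAX_USES = 12  # mirrors the "<= 12 uses" bound mentioned in the abstract

def next_states(state):
    """Hypothetical usage-control transitions (not the real UseCON model):
    each use mutates the 'uses' attribute, and an ongoing access is revoked
    once the allowance is exhausted (continuity of decision)."""
    status, uses = state
    if status == "idle":
        yield ("requested", uses)
    elif status == "requested":
        yield ("active", uses) if uses < MAX_USES else ("denied", uses)
    elif status == "active":
        if uses < MAX_USES:
            yield ("active", uses + 1)   # attribute mutability: one more use
        else:
            yield ("revoked", uses)      # decision re-evaluated during usage
        yield ("ended", uses)

def check_invariant():
    """Explicit-state exploration: no reachable state may be 'active'
    with more than MAX_USES uses."""
    seen, frontier = set(), deque([("idle", 0)])
    while frontier:
        s = frontier.popleft()
        if s in seen:
            continue
        seen.add(s)
        status, uses = s
        assert not (status == "active" and uses > MAX_USES), f"violation: {s}"
        frontier.extend(next_states(s))
    return len(seen)

if __name__ == "__main__":
    print(check_invariant(), "reachable states explored, invariant holds")
```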
RISC-V is a relatively new, open instruction set architecture with a mature ecosystem and an official formal machine-readable specification. It is therefore a promising playground for formal-methods research. However, we observe that different formal-methods research projects are interested in different aspects of RISC-V and want to simplify, abstract, approximate, or ignore the other aspects. Often, they also require different encoding styles, resulting in each project starting a new formalization from scratch. We set out to identify the commonalities between projects and to represent the RISC-V specification as a program with holes that can be instantiated differently by different projects. Our formalization of the RISC-V specification is written in Haskell and leverages existing tools rather than requiring new domain-specific tools, contrary to other approaches. To our knowledge, it is the first RISC-V specification able to serve as the interface between a processor-correctness proof and a compiler-correctness proof, while also supporting several other projects with diverging requirements.
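The "program with holes" idea can be pictured with a small Python analogue; the actual formalization is in Haskell, and the class, method, and instruction names below are invented. The execution semantics is written once against an abstract platform interface, and each project fills the holes with its own memory or environment model.

```python
from abc import ABC, abstractmethod

class Platform(ABC):
    """The 'holes': each project plugs in its own memory and environment model."""
    @abstractmethod
    def load_word(self, addr: int) -> int: ...
    @abstractmethod
    def store_word(self, addr: int, value: int) -> None: ...

class FlatMemory(Platform):
    """One possible instantiation: a simple flat, word-addressed memory."""
    def __init__(self):
        self.mem = {}
    def load_word(self, addr: int) -> int:
        return self.mem.get(addr, 0)
    def store_word(self, addr: int, value: int) -> None:
        self.mem[addr] = value & 0xFFFFFFFF

class Machine:
    """A tiny fragment of an ISA semantics, parameterized by the platform."""
    def __init__(self, platform: Platform):
        self.regs = [0] * 32
        self.pc = 0
        self.platform = platform

    def addi(self, rd: int, rs1: int, imm: int) -> None:
        if rd != 0:                                   # x0 is hard-wired to zero
            self.regs[rd] = (self.regs[rs1] + imm) & 0xFFFFFFFF
        self.pc += 4

    def lw(self, rd: int, rs1: int, offset: int) -> None:
        value = self.platform.load_word(self.regs[rs1] + offset)  # delegated to a hole
        if rd != 0:
            self.regs[rd] = value
        self.pc += 4
```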
Yusuke Kawamoto (2020)
We propose an epistemic approach to formalizing statistical properties of machine learning. Specifically, we introduce a formal model for supervised learning based on a Kripke model, where each possible world corresponds to a possible dataset and modal operators are interpreted as transformation and testing on datasets. Then we formalize various notions of the classification performance, robustness, and fairness of statistical classifiers by using our extension of statistical epistemic logic (StatEL). In this formalization, we show relationships among properties of classifiers, and the relevance of classification performance to robustness. As far as we know, this is the first work that uses epistemic models and logical formulas to express statistical properties of machine learning, and it could serve as a starting point for developing theories of formal specification of machine learning.
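A very small Python sketch of the idea follows; it is not StatEL itself, and the worlds, accessibility relation, and property below are invented. Possible worlds are datasets, and a box operator checks that a property, here a test of classification accuracy, holds in every accessible world.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

Dataset = List[Tuple[float, int]]            # (feature, label) pairs

@dataclass
class KripkeModel:
    worlds: Dict[str, Dataset]               # each possible world is a dataset
    access: Dict[str, List[str]]             # accessibility relation between worlds

    def box(self, w: str, prop: Callable[[Dataset], bool]) -> bool:
        """Box(prop) holds at world w iff prop holds in every accessible world."""
        return all(prop(self.worlds[v]) for v in self.access[w])

def accuracy_at_least(classify: Callable[[float], int], theta: float):
    """A 'test on datasets': accuracy of the classifier is at least theta."""
    def prop(ds: Dataset) -> bool:
        correct = sum(1 for x, y in ds if classify(x) == y)
        return correct / len(ds) >= theta
    return prop

if __name__ == "__main__":
    model = KripkeModel(
        worlds={"observed": [(0.1, 0), (0.9, 1)],
                "shifted":  [(0.2, 0), (0.8, 1), (0.55, 0)]},
        access={"observed": ["observed", "shifted"]},
    )
    classifier = lambda x: int(x >= 0.5)
    print(model.box("observed", accuracy_at_least(classifier, 0.6)))
```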
Implementations of artificial neural networks (ANNs) might lead to failures that are hard to predict in the design phase, since ANNs are highly parallel and their parameters are barely interpretable. Here, we develop and evaluate a novel symbolic verification framework using incremental bounded model checking (BMC), satisfiability modulo theories (SMT), and invariant inference to obtain adversarial cases and validate coverage methods in a multi-layer perceptron (MLP). We exploit incremental BMC based on interval analysis to compute boundaries from a neuron's input. These boundaries are then propagated to effectively find a neuron's output, since it is the input of the next one. This paper describes the first bit-precise symbolic verification framework to reason over actual implementations of ANNs in CUDA, based on invariant inference, thereby providing further guarantees about finite-precision arithmetic and its rounding errors, which are routinely ignored in the existing literature. We have implemented the proposed approach on top of the efficient SMT-based bounded model checker (ESBMC), and our experimental results show that it can successfully verify safety properties in actual implementations of ANNs and generate real adversarial cases in MLPs. Our approach was able to verify and produce adversarial examples for 85.8% of 21 test cases considering different input images, and 100% of the properties related to covering methods. Although our verification time is higher than that of existing approaches, our methodology can consider fixed-point implementation aspects that are disregarded by state-of-the-art verification methodologies.
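The interval-propagation step described above can be sketched in a few lines of Python; this shows only the numerical idea, not the paper's bit-precise SMT encoding of a CUDA implementation, and the layer sizes and weights are made up. Each neuron's output interval is computed from its input interval and then fed to the next layer.

```python
import numpy as np

def interval_layer(lo, hi, W, b):
    """Propagate an input box [lo, hi] through y = W x + b followed by ReLU.
    Standard interval arithmetic: positive weights pick up the bound of the
    same side, negative weights the opposite one; ReLU is monotone."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    y_lo = W_pos @ lo + W_neg @ hi + b
    y_hi = W_pos @ hi + W_neg @ lo + b
    return np.maximum(y_lo, 0.0), np.maximum(y_hi, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # made-up MLP weights
    W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
    lo, hi = np.zeros(3), np.full(3, 0.1)                  # box around an input point
    lo, hi = interval_layer(lo, hi, W1, b1)                # hidden-layer bounds...
    lo, hi = interval_layer(lo, hi, W2, b2)                # ...become the next layer's input
    print(lo, hi)                                          # bounds on the network output
```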
Techniques for runtime verification often utilise specification languages that are (i) reasonably expressive, and (ii) relatively abstract (i.e., they operate at a level of abstraction that separates them from the system being monitored). Inspired by the problem of monitoring systems involved in processing data generated by the high-energy physics experiments at CERN, this report proposes a specification language, Control Flow Temporal Logic (CFTL), whose distinguishing characteristic is its tight coupling with the control flow of the programs for which it is used to write specifications. This coupling leads to a departure from the typically high level of abstraction used by most temporal logics. The remaining contributions are a static-analysis-based instrumentation process, which is specific to CFTL and the structure of its formulas, and a monitoring algorithm. The report concludes with analyses of CFTL and its monitoring algorithm when applied to a number of example programs.
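As a loose illustration of instrumentation plus monitoring over control flow, and not CFTL, its instrumentation process, or its monitoring algorithm, the Python sketch below records entry and exit events of selected calls and afterwards checks that every completed call returned within a time bound; the decorator and the timing property are invented for illustration.

```python
import time
from functools import wraps

events = []   # (function name, phase, timestamp) records emitted by instrumentation

def instrument(fn):
    """Record an event at each entry to and exit from the decorated function."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        events.append((fn.__name__, "call", time.monotonic()))
        try:
            return fn(*args, **kwargs)
        finally:
            events.append((fn.__name__, "return", time.monotonic()))
    return wrapper

def every_call_within(name: str, bound: float) -> bool:
    """Property: every completed call to `name` returns within `bound` seconds."""
    pending = []
    for fn, phase, t in events:
        if fn != name:
            continue
        if phase == "call":
            pending.append(t)
        elif pending and t - pending.pop() > bound:
            return False
    return True

@instrument
def process_event(payload):
    time.sleep(0.01)          # stand-in for real data-processing work
    return len(payload)

if __name__ == "__main__":
    for chunk in ("run-1", "run-22", "run-333"):
        process_event(chunk)
    print(every_call_within("process_event", bound=0.5))
```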