We study the impact of synchronous and asynchronous monitoring instrumentation on runtime overheads in the context of a runtime verification framework for actor-based systems. We show that, in such a context, asynchronous monitoring incurs substantially lower overhead costs. We also show how, for certain properties that require synchronous monitoring, a hybrid approach can be used that ensures timely violation detection for the important events while, at the same time, incurring overhead costs closer to those of asynchronous instrumentation.
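The distinction can be pictured with a minimal sketch (not the paper's framework; the actor, monitor, and event names below are illustrative assumptions): under synchronous instrumentation the monitored actor blocks on every property check, whereas under asynchronous instrumentation events are buffered and checked off the actor's critical path. A hybrid scheme would route only the events that need timely detection through the synchronous path.

    # Minimal sketch contrasting synchronous and asynchronous monitor
    # instrumentation for a message-handling actor (all names are illustrative).
    import queue, threading

    class Monitor:
        def check(self, event):
            # Placeholder property check, e.g. flag a forbidden message.
            if event == "forbidden":
                print("violation detected:", event)

    class Actor:
        def __init__(self, monitor, synchronous=True):
            self.monitor = monitor
            self.synchronous = synchronous
            self.trace = queue.Queue()   # event buffer for asynchronous monitoring
            if not synchronous:
                threading.Thread(target=self._drain, daemon=True).start()

        def _drain(self):
            while True:
                # Monitor consumes buffered events later, off the actor's critical path.
                self.monitor.check(self.trace.get())

        def handle(self, msg):
            if self.synchronous:
                self.monitor.check(msg)  # actor blocks until the check completes
            else:
                self.trace.put(msg)      # actor continues immediately
            # ... actual message processing would go here ...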
A central issue in distributed systems is how to deal with interactions between components. In this paper, we investigate the paradigms of synchronous and asynchronous interaction in the context of distributed systems, and examine to what extent, or under which conditions, synchronous interaction is a valid concept for the specification and implementation of such systems. We choose Petri nets as our system model and consider different notions of distribution by associating locations with the elements of nets. First, we investigate the concept of simultaneity which is inherent in the semantics of Petri nets when transitions have multiple input places. We assume that tokens may only be taken instantaneously by transitions at the same location, and we exhibit a hierarchy of asynchronous Petri net classes arising from different assumptions on possible distributions. Alternatively, we assume that the synchronisations specified in a Petri net are crucial system properties, so that transitions and their preplaces may no longer be placed on separate locations. We then answer the question of which systems may be implemented in a distributed way without restricting concurrency, assuming that locations are inherently sequential. It turns out that in both settings we find semi-structural properties of Petri nets describing exactly the problematic situations for interactions in distributed systems.
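To make the role of locations concrete, one possible formalisation (notation mine, not necessarily the paper's) equips a net with a placement map and states the two location constraints informally described above:
\[
N = (S, T, F),\qquad F \subseteq (S \times T) \cup (T \times S),\qquad \lambda : S \cup T \to \mathit{Loc},
\]
\[
\text{first setting: } t \text{ may take a token from } s \in {}^{\bullet}t \text{ instantaneously only if } \lambda(s) = \lambda(t);
\]
\[
\text{second setting: } \forall t \in T,\ \forall s \in {}^{\bullet}t :\ \lambda(s) = \lambda(t).
\]
Under this reading, the hierarchy of asynchronous net classes corresponds to different constraints on the admissible placement maps $\lambda$.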
Process calculi may be compared in their expressive power by means of encodings between them. A widely accepted definition of what constitutes a valid encoding for (dis)proving relative expressiveness results between process calculi was proposed by Gorla. Prior to this work, diverse encodability and separation results were generally obtained using distinct, and often incompatible, quality criteria on encodings. Textbook examples of valid encodings are those proposed by Boudol and by Honda & Tokoro of the synchronous choice-free $\pi$-calculus into its asynchronous fragment, illustrating that the latter is no less expressive than the former. Here I formally establish that these encodings indeed satisfy Gorla's criteria.
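For reference, the two encodings are usually presented as follows (recalled here in one common formulation, modulo minor notational variants; the remaining operators are translated homomorphically, and $u,v$ are fresh names):
\[
\llbracket \bar{x}y.P \rrbracket_{B} = (\nu u)\bigl(\bar{x}u \mid u(v).(\bar{v}y \mid \llbracket P \rrbracket_{B})\bigr),\qquad
\llbracket x(z).P \rrbracket_{B} = x(u).(\nu v)\bigl(\bar{u}v \mid v(z).\llbracket P \rrbracket_{B}\bigr)
\]
for Boudol's encoding, and
\[
\llbracket \bar{x}y.P \rrbracket_{HT} = x(u).\bigl(\bar{u}y \mid \llbracket P \rrbracket_{HT}\bigr),\qquad
\llbracket x(z).P \rrbracket_{HT} = (\nu u)\bigl(\bar{x}u \mid u(z).\llbracket P \rrbracket_{HT}\bigr)
\]
for the encoding of Honda & Tokoro.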
This volume contains the proceedings of the First International Workshop on Formal Techniques for Safety-Critical Systems (FTSCS 2012), held in Kyoto on November 12, 2012, as a satellite event of the ICFEM conference. The aim of this workshop is to bring together researchers and engineers interested in the application of (semi-)formal methods to improve the quality of safety-critical computer systems. FTSCS is particularly interested in industrial applications of formal methods. Topics include:
- the use of formal methods for safety-critical and QoS-critical systems, including avionics, automotive, and medical systems;
- methods, techniques and tools to support automated analysis, certification, debugging, etc.;
- analysis methods that address the limitations of formal methods in industry;
- formal analysis support for modeling languages used in industry, such as AADL, Ptolemy, SysML, SCADE, Modelica, etc.; and
- code generation from validated models.
The workshop received 25 submissions; 21 of these were regular papers and 4 were tool/work-in-progress/position papers. Each submission was reviewed by three referees; based on the reviews and extensive discussions, the program committee selected nine regular papers, which are included in this volume. Our program also included an invited talk by Ralf Huuck.
This work introduces a general multi-level model for self-adaptive systems. A self-adaptive system is seen as composed of two levels: the lower level describing the actual behaviour of the system and the upper level accounting for the dynamically changing environmental constraints imposed on the system. In order to keep the description as general as possible, the lower level is modelled as a state machine and the upper level as a second-order state machine whose states have associated formulas over observable variables of the lower level. Thus, each state of the second-order machine identifies the set of lower-level states satisfying its constraints. Adaptation is triggered when a second-order transition is performed; this means that the current system can no longer satisfy the current high-level constraints and must therefore adapt its behaviour by reaching a state that meets the new constraints. The semantics of the multi-level system is given by a flattened transition system that can be statically checked in order to prove the correctness of the adaptation model. To this aim we formalize two concepts of adaptability, weak and strong, providing both a relational and a logical characterization of each. This work gives a formal computational characterization of multi-level self-adaptive systems, evidencing the important role that (theoretical) computer science can play in the emerging science of complex systems.
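One way to spell out the construction (notation mine, following the informal description above) is:
\[
L = (Q, \rightarrow),\qquad U = (C, \Rightarrow),\qquad \varphi_c \text{ a formula over the observables of } L \text{ for each } c \in C,\qquad \llbracket c \rrbracket = \{\, q \in Q \mid q \models \varphi_c \,\}.
\]
When a second-order transition $c \Rightarrow c'$ is taken in a lower-level state $q \in \llbracket c \rrbracket$, the system must adapt by reaching some $q' \in \llbracket c' \rrbracket$. Read informally, weak adaptability asks that such a $q'$ is reachable along some run of the flattened transition system, while strong adaptability asks that every run eventually reaches one.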
By exploiting the increasing attack surface of systems, cyber-attacks can cause catastrophic events, such as remotely disabling safety mechanisms. This means that, in order to avoid hazards, safety and security need to be integrated, exchanging information such as key hazards/threats, risk evaluations, and the mechanisms used. This white paper describes some steps towards this integration by using models. We start by identifying some key technical challenges. Then we demonstrate how models, such as Goal Structuring Notation (GSN) for safety and Attack-Defense Trees (ADT) for security, can address these challenges. In particular, (1) we demonstrate how to extract, in an automated fashion, security-relevant information from safety assessments by translating GSN models into ADTs; (2) we show how security results can impact the confidence of safety assessments; (3) we propose a collaborative development process where safety and security assessments are built by incrementally taking into account safety and security analysis; and (4) we describe how to carry out trade-off analysis in an automated fashion, for example identifying when safety and security arguments contradict each other and how to resolve such contradictions. We conclude by pointing out that these are the first steps towards a wide range of techniques to support safety and security engineering. As a white paper, we avoid being too technical, preferring to illustrate features by examples and thus be more accessible.
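As a toy illustration of point (1) only (a sketch of my own, not the white paper's tooling; the node types and the example safety case are invented), one can walk a simplified GSN goal decomposition and emit a skeleton ADT in which every safety claim becomes a candidate attacker objective for security analysts to refine:

    # Toy GSN-to-ADT translation sketch (illustrative assumptions throughout).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class GsnGoal:                  # simplified GSN node: a goal and its sub-goals
        statement: str
        subgoals: List["GsnGoal"] = field(default_factory=list)

    @dataclass
    class AdtNode:                  # simplified ADT node: an attack goal plus refinements
        attack_goal: str
        children: List["AdtNode"] = field(default_factory=list)

    def gsn_to_adt(goal: GsnGoal) -> AdtNode:
        # Each safety goal "X holds" is mapped to the attacker objective "defeat X";
        # the goal decomposition is preserved so that security analysis targets
        # exactly the claims the safety case depends on.
        return AdtNode(f"Defeat: {goal.statement}",
                       [gsn_to_adt(g) for g in goal.subgoals])

    safety_case = GsnGoal("Autonomous braking is acceptably safe",
                          [GsnGoal("Sensor readings are trustworthy"),
                           GsnGoal("Brake command is issued within 50 ms")])
    print(gsn_to_adt(safety_case))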