
Structural Invariants for the Verification of Systems with Parameterized Architectures

 Added by Christoph Welzel
 Publication date: 2020
 Research language: English





We consider parameterized concurrent systems consisting of a finite but unknown number of components, obtained by replicating a given set of finite state automata. Components communicate by executing atomic interactions whose participants update their states simultaneously. We introduce an interaction logic to specify both the type of interactions (e.g. rendez-vous, broadcast) and the topology of the system (e.g. pipeline, ring). The logic can be easily embedded in monadic second-order logic of finitely many successors, and is therefore decidable. Proving safety properties of such a parameterized system, like deadlock freedom or mutual exclusion, requires inferring an inductive invariant that contains all reachable states of all system instances and no unsafe state. We present a method to automatically synthesize inductive invariants directly from the formula describing the interactions, without costly fixed point iterations. We show experimentally that this invariant is strong enough to verify safety properties of a large number of systems, including textbook examples (dining philosophers, synchronization schemes), classical mutual exclusion algorithms, cache-coherence protocols and self-stabilization algorithms, for an arbitrary number of components.
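
The core check behind such an invariant-based approach is inductiveness: the candidate invariant holds in the initial state and is preserved by every interaction. The Python sketch below illustrates this check on dining philosophers for one hypothetical fixed instance size N = 4 (the paper synthesizes such invariants symbolically for every N, directly from the interaction formula, so this brute-force enumeration is only illustrative). The "take" interaction is a three-party rendez-vous between a philosopher and both adjacent forks, and the candidate invariant ties each fork's state to its neighbouring philosophers.

```python
from itertools import product

N = 4  # hypothetical fixed instance size; the paper handles arbitrary N

def interactions(state):
    """Yield all successor states reachable by one atomic interaction."""
    phils, forks = state
    for i in range(N):
        left, right = i, (i + 1) % N
        # take: a rendez-vous of philosopher i and both adjacent forks
        if phils[i] == 'think' and forks[left] == 'free' and forks[right] == 'free':
            p, f = list(phils), list(forks)
            p[i], f[left], f[right] = 'eat', 'busy', 'busy'
            yield (tuple(p), tuple(f))
        # release: philosopher i puts both forks back atomically
        if phils[i] == 'eat':
            p, f = list(phils), list(forks)
            p[i], f[left], f[right] = 'think', 'free', 'free'
            yield (tuple(p), tuple(f))

def invariant(state):
    """Candidate invariant: fork j is busy iff exactly one adjacent
    philosopher (j-1 or j) is eating, and free iff neither is."""
    phils, forks = state
    for j in range(N):
        eaters = (phils[(j - 1) % N] == 'eat') + (phils[j] == 'eat')
        if forks[j] == 'free' and eaters != 0:
            return False
        if forks[j] == 'busy' and eaters != 1:
            return False
    return True

def safe(state):
    """Safety property: no two neighbouring philosophers eat at once."""
    phils, _ = state
    return all(not (phils[i] == 'eat' and phils[(i + 1) % N] == 'eat')
               for i in range(N))

init = (('think',) * N, ('free',) * N)
assert invariant(init)  # base case

all_states = [(p, f)
              for p in product(('think', 'eat'), repeat=N)
              for f in product(('free', 'busy'), repeat=N)]
# inductive step: every interaction fired from an invariant state stays inside
assert all(invariant(t)
           for s in all_states if invariant(s)
           for t in interactions(s))
# the invariant excludes the unsafe states
assert all(safe(s) for s in all_states if invariant(s))
print("candidate invariant is inductive and implies safety for N =", N)
```

If the assertions pass, the candidate is inductive and implies mutual exclusion for the chosen N; the point of the parameterized method is to avoid repeating this check for every instance size.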



Related research

We consider concurrent systems consisting of a finite but unknown number of components that are replicated instances of a given set of finite state automata. The components communicate by executing interactions, which are simultaneous atomic state changes of a set of components. We specify both the type of interactions (e.g. rendez-vous, broadcast) and the topology (i.e. architecture) of the system (e.g. pipeline, ring) via a decidable interaction logic, which is embedded in the classical weak sequential calculus of one successor (WS1S). Proving correctness of such a system with respect to safety properties, such as deadlock freedom or mutual exclusion, requires the inference of an inductive invariant that subsumes the set of reachable states and avoids the unsafe states. Our method synthesizes such invariants directly from the formula describing the interactions, without costly fixed point iterations. We applied our technique to the verification of several textbook examples, such as dining philosophers, mutual exclusion protocols and concurrent systems with preemption and priorities.
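
For contrast, the sketch below performs the kind of explicit-state fixed point computation that the invariant-based approach avoids: it enumerates the reachable states of a hypothetical token-passing ring of fixed size N and then checks mutual exclusion on the token and deadlock freedom. The enumeration has to be redone for every N, which is precisely what parameterized verification sidesteps.

```python
N = 5  # hypothetical fixed ring size; parameterized verification avoids fixing it

def enabled(state, i):
    # interaction pass_i: component i holds the token, component i+1 is idle
    return state[i] == 'T' and state[(i + 1) % N] == 'I'

def fire(state, i):
    s = list(state)
    s[i], s[(i + 1) % N] = 'I', 'T'
    return tuple(s)

init = ('T',) + ('I',) * (N - 1)

# explicit-state reachability: the costly fixed point iteration
seen, frontier = {init}, [init]
while frontier:
    s = frontier.pop()
    for i in range(N):
        if enabled(s, i):
            t = fire(s, i)
            if t not in seen:
                seen.add(t)
                frontier.append(t)

# safety: exactly one token; deadlock freedom: some interaction is enabled
assert all(s.count('T') == 1 for s in seen)
assert all(any(enabled(s, i) for i in range(N)) for s in seen)
print(len(seen), "reachable states checked for N =", N)
```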
In this work we study the cost of local and global proofs in distributed verification. In this setting the nodes of a distributed system are provided with a nondeterministic proof for the correctness of the state of the system, and the nodes need to verify this proof by looking only at their local neighborhood in the system. Previous works have studied the model where each node is given its own, possibly unique, part of the proof as input; the cost of a proof is the maximum size of an individual label. We compare this model to a model where each node has access to the same global proof, and the cost is the size of this global proof. It is easy to see that a global proof can always include all of the local proofs, and every local proof can be a copy of the global proof. We show that there exist properties that exhibit these relative proof sizes, and also properties that fall somewhere in between. In addition, we introduce a new lower bound technique and use it to prove a tight lower bound on the complexity of reversing distributed decision, and we establish a link between communication complexity and distributed proof complexity.
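
A classic illustration of a local proof (a textbook proof-labelling example, hypothetical here and not one of the paper's constructions) certifies that parent pointers form a tree rooted at a designated node: the prover labels every node with its distance to the root, each node checks only itself and its parent, and acceptance by all nodes implies global correctness because labels strictly decrease along parent pointers, ruling out cycles.

```python
def local_check(v, root, parent, label):
    """What node v verifies, looking only at itself and its parent."""
    if v == root:
        return parent[v] == v and label[v] == 0
    return label[v] == label[parent[v]] + 1

def verify(nodes, root, parent, label):
    """The proof is accepted only if every node accepts locally."""
    return all(local_check(v, root, parent, label) for v in nodes)

# hypothetical 5-node instance: parent pointers plus distance-to-root labels
nodes = range(5)
parent = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2}
label  = {0: 0, 1: 1, 2: 2, 3: 2, 4: 3}

print(verify(nodes, 0, parent, label))        # True: a valid local proof
corrupted = {**label, 2: 5}                   # tamper with one label
print(verify(nodes, 0, parent, corrupted))    # False: nodes 2 and 4 reject
```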
Rafael C. Cardoso, 2020
Software engineering of modular robotic systems is a challenging task; verifying that the developed components all behave as they should, both individually and as a whole, presents its own unique set of challenges. In particular, distinct components in a modular robotic system often require different verification techniques to ensure that they behave as expected, and ensuring whole-system consistency when individual components are verified using a variety of techniques and formalisms is difficult. This paper discusses how to use compositional verification to integrate the various verification techniques that are applied to modular robotic software, using a First-Order Logic (FOL) contract that captures each component's assumptions and guarantees. These contracts can then be used to guide the verification of the individual components, be it by testing or by the use of a formal method. We provide an illustrative example of an autonomous robot used in remote inspection. We also discuss a way of defining confidence for the verification associated with each component.
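
As a rough illustration of the contract idea (hypothetical component names and a small finite domain standing in for FOL models, not the paper's formalization), a contract pairs an assumption on a component's inputs with a guarantee on its outputs, and composition is checked by discharging each downstream assumption against the upstream guarantee:

```python
from dataclasses import dataclass
from itertools import product
from typing import Callable, Dict

State = Dict[str, int]

@dataclass
class Contract:
    """An assume-guarantee contract over observable states."""
    assumption: Callable[[State], bool]
    guarantee: Callable[[State], bool]

def discharges(upstream: Contract, downstream: Contract, states) -> bool:
    """Every state permitted by the upstream guarantee satisfies the
    downstream assumption (checked over a finite toy domain)."""
    return all(downstream.assumption(s) for s in states if upstream.guarantee(s))

# hypothetical remote-inspection components: a mapper bounds the position
# error, and a planner assumes a bounded error in order to limit speed
states = [{'err': e, 'speed': v} for e, v in product(range(10), range(5))]
mapper  = Contract(assumption=lambda s: True,
                   guarantee=lambda s: s['err'] <= 3)
planner = Contract(assumption=lambda s: s['err'] <= 5,
                   guarantee=lambda s: s['speed'] <= 2)

print(discharges(mapper, planner, states))  # True: the composition is consistent
```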
Many multimedia applications are emerging on portable appliances. They require both the flexibility of upgradeable devices (traditionally software-based) and a powerful computing engine (typically hardware). In this context, programmable HW and dynamic reconfiguration allow novel approaches to migrating algorithms from SW to HW. Within the framework of the Symbad project, we therefore propose an industrial design flow for reconfigurable SoCs. The goal of Symbad is to develop a system-level design platform for hardware/software SoC systems, including formal and semi-formal verification techniques.
Deep learning methods can be used to produce control policies, but certifying their safety is challenging. The resulting networks are nonlinear and often very large. In response to this challenge, we present OVERT: a sound algorithm for safety verification of nonlinear discrete-time closed-loop dynamical systems with neural network control policies. The novelty of OVERT lies in combining ideas from the classical formal methods literature with ideas from the newer neural network verification literature. The central concept of OVERT is to abstract nonlinear functions with a set of optimally tight piecewise linear bounds. Such piecewise linear bounds are designed for seamless integration into ReLU neural network verification tools. OVERT can be used to prove bounded-time safety properties by either computing reachable sets or solving feasibility queries directly. We demonstrate safety verification on several classical benchmark examples. OVERT compares favorably to existing methods both in computation time and in tightness of the reachable set.
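
The central idea, replacing a nonlinear function by piecewise-linear upper and lower bounds that ReLU verification tools can consume, can be illustrated with a toy example (this is not OVERT's optimally tight construction; it simply uses chords and tangents of a convex function):

```python
import numpy as np

# Toy piecewise-linear envelope of f(x) = x^2 on [-2, 2]: chords give an
# upper bound and midpoint tangents give a lower bound (valid because f is
# convex); OVERT instead computes optimally tight bounds for general
# nonlinear terms.
f  = lambda x: x ** 2
df = lambda x: 2 * x

breaks = np.linspace(-2.0, 2.0, 5)   # segment endpoints of the abstraction

def segment(x):
    """Index of the segment containing each point x."""
    return np.clip(np.searchsorted(breaks, x) - 1, 0, len(breaks) - 2)

def upper(x):
    i = segment(x)
    a, b = breaks[i], breaks[i + 1]
    return f(a) + (f(b) - f(a)) * (x - a) / (b - a)   # chord over [a, b]

def lower(x):
    i = segment(x)
    m = (breaks[i] + breaks[i + 1]) / 2
    return f(m) + df(m) * (x - m)                     # tangent at the midpoint

xs = np.linspace(-2.0, 2.0, 1001)
assert np.all(lower(xs) <= f(xs) + 1e-9) and np.all(f(xs) <= upper(xs) + 1e-9)
print("maximum gap between the bounds:", float(np.max(upper(xs) - lower(xs))))
```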
