Development of formal proofs of correctness of programs can increase actual and perceived reliability and facilitate better understanding of program specifications and their underlying assumptions. Tools supporting such development have been available for over 40 years, but have only recently seen wide practical use. Projects based on construction of machine-checked formal proofs are now reaching an unprecedented scale, comparable to large software projects, which leads to new challenges in proof development and maintenance. Despite its increasing importance, the field of proof engineering is seldom considered in its own right; related theories, techniques, and tools span many fields and venues. This survey of the literature presents a holistic understanding of proof engineering for program correctness, covering impact in practice, foundations, proof automation, proof organization, and practical proof development.
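As a concrete illustration of what a machine-checked proof of program correctness looks like (a minimal sketch of our own in Lean 4, not an example taken from the survey), the following theorem states that reversing a list preserves its length; it is accepted only if Lean's kernel validates every step:

    -- Minimal sketch, assuming Lean 4 with its core List library.
    -- The kernel checks that the induction covers every case.
    theorem reverse_length {α : Type} (xs : List α) :
        xs.reverse.length = xs.length := by
      induction xs with
      | nil => rfl
      | cons x xs ih =>
        -- (x :: xs).reverse = xs.reverse ++ [x], and append adds lengths.
        simp [List.reverse_cons, List.length_append, ih]

Even a proof this small must be revised as the program and its libraries evolve, which is the maintenance concern that proof engineering addresses at scale.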
Designing software that controls industrial equipment is challenging, especially due to its inherently concurrent nature. Testing this kind of event-driven control software is difficult and, due to the large number of possible execution scenarios, only …
An attacker who gains access to a cryptocurrency user's private keys can perform any operation in her stead. Due to the decentralized nature of most cryptocurrencies, no entity can revert those operations. This is a central challenge for decentralized …
In this paper we introduce the notion of Modal Software Engineering: automatically turning sequential, deterministic programs into semantically equivalent programs that operate efficiently on inputs coming from multiple overlapping worlds. We are drawing …
Quantum software plays a critical role in exploiting the full potential of quantum computing systems and, as a result, has recently been attracting increasing attention. This paper defines the term quantum software engineering and introduces a quantum software …
We present Revel, a partially neural reinforcement learning (RL) framework for provably safe exploration in continuous state and action spaces. A key challenge for provably safe deep RL is that repeatedly verifying neural networks within a learning loop …