This report documents safety assurance argument templates to support the deployment and operation of autonomous systems that include machine learning (ML) components. The document presents example safety argument templates covering: the development of safety requirements, hazard analysis, a safety monitor architecture for an autonomous system including at least one ML element, a component with ML, and the adaptation and change of the system over time. The report also presents generic templates for argument defeaters and evidence confidence that can be used to strengthen, review, and adapt the templates as necessary. This report is made available to solicit feedback on the approach and on the templates. This work was sponsored by the UK Defence Science and Technology Laboratory (Dstl) under the R-cloud framework.
Innovation today is driven largely by software. Companies need to continuously rejuvenate their product portfolios with new features to stay ahead of their competitors. For example, recent trends explore the application of blockchains to domains other than finance. This paper analyzes the state of the art for safety-critical systems as found in modern vehicles such as self-driving cars, smart energy systems, and home automation, focusing on specific challenges where key ideas behind blockchains might be applicable. Next, the potential benefits unlocked by applying such ideas are presented and discussed for each usage scenario. Finally, a research agenda is outlined, summarizing the remaining challenges for successfully applying blockchains to safety-critical cyber-physical systems.
Control schemes for autonomous systems are often designed to anticipate the worst case in any situation. At runtime, however, there may be opportunities to leverage the characteristics of the specific environment and operation context for more efficient control. In this work, we develop an online intermittent-control framework that combines formal verification with model-based optimization and deep reinforcement learning to opportunistically skip certain control computations and actuations, saving actuation energy and computational resources without compromising system safety. Experiments on an adaptive cruise control system demonstrate that our approach achieves significant energy and computation savings.
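To make the skip-or-compute pattern concrete, here is a minimal, self-contained Python sketch. All names (safe_to_skip, compute_control), parameter values, and dynamics are illustrative assumptions, not the paper's implementation: the conservative one-step check merely stands in for the paper's formal verification, and the saturated proportional law stands in for its model-based optimization and deep reinforcement learning.

```python
# Sketch of opportunistic intermittent control for adaptive cruise control.
# All names, parameters, and dynamics are illustrative assumptions.

DT = 0.1                   # control period [s]
D_SAFE = 10.0              # minimum acceptable gap to the lead vehicle [m]
A_MAX, A_MIN = 2.0, -4.0   # actuation limits [m/s^2]

def safe_to_skip(gap, v_ego, v_lead):
    """Conservative stand-in for formal verification: even if the lead
    vehicle brakes at its hardest and we coast for one period, the gap
    must remain above D_SAFE."""
    worst_v_lead = v_lead + A_MIN * DT
    worst_gap = gap + (worst_v_lead - v_ego) * DT
    return worst_gap > D_SAFE

def compute_control(gap, v_ego, v_lead):
    """Stand-in for the expensive controller (the paper uses model-based
    optimization and deep RL); here, a saturated proportional rule."""
    a = 0.5 * (gap - D_SAFE) + 0.8 * (v_lead - v_ego)
    return max(A_MIN, min(A_MAX, a))

def step(gap, v_ego, v_lead):
    """One control period: skip computation and actuation when provably
    safe, otherwise run the full controller."""
    if safe_to_skip(gap, v_ego, v_lead):
        u, skipped = 0.0, True    # no computation, no actuation: coast
    else:
        u, skipped = compute_control(gap, v_ego, v_lead), False
    v_ego += u * DT
    gap += (v_lead - v_ego) * DT
    return gap, v_ego, skipped

gap, v_ego, v_lead = 30.0, 22.0, 18.0
n_skipped = 0
for _ in range(200):
    gap, v_ego, skipped = step(gap, v_ego, v_lead)
    n_skipped += skipped
print(f"gap: {gap:.1f} m, ego speed: {v_ego:.1f} m/s, skipped {n_skipped}/200 steps")
```

The point of the pattern is that the cheap, verified check runs every period while the expensive controller runs only when the check fails; in this toy run the controller stays dormant while the gap is comfortably large and engages as the ego vehicle closes on the slower lead.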
Context: Safety analysis is a predominant activity in developing safety-critical systems. Because safety-critical systems are increasingly sophisticated and development processes are close-knit, it is a highly cooperative task among multiple functional departments, and communication occurs pervasively. Motivation: Effective communication channels among multiple functional departments influence the quality of safety analysis as well as the delivery of a safe product. However, the use of communication channels during safety analysis is sometimes arbitrary and poses challenges. Objective: Investigate the existing communication channels, their usage frequencies, their purposes, and their challenges during safety analysis in industry. Method: A multiple-case study of experts (survey: 39, interview: 21) at safety-critical companies, including software developers, quality engineers, and functional safety managers. Direct observations and documentation review were also conducted. Results: Popular communication channels during safety analysis include formal meetings, project coordination tools, documentation, and telephone. Email, personal discussion, training, internal communication software, and boards are also in use. Training involving safety analysis happens 1-4 times per year, while the other communication channels are used anywhere from 1-4 times per day to 1-4 times per month. We summarise 28 purposes for these communication channels. Communication happens mostly to clarify safety requirements; to resolve temporary problems, conflicts, and obstacles; and to share safety knowledge. The top challenges are reported. Conclusion: To use communication channels effectively during safety analysis and avoid these challenges, a clear purpose for communication should be established at the outset. Deriving countermeasures for the top 10 challenges is a potential next step.
In this work, we outline a cross-domain assurance process for safety-relevant software in embedded systems. This process is intended to be applicable in a variety of application domains and in conjunction with any development methodology. With this approach, we aim to reduce the growing effort of safety assessment for embedded systems by reusing safety analysis techniques and tools for product development across domains.
We consider the pressing question of how to model, verify, and ensure that autonomous systems meet certain obligations (like the obligation to respect traffic laws) and refrain from impermissible behavior (like recklessly changing lanes). Temporal logics are heavily used in autonomous system design; however, as we illustrate here, temporal (alethic) logics alone are inappropriate for reasoning about obligations of autonomous systems. This paper proposes the use of Dominance Act Utilitarianism (DAU), a deontic logic of agency, to encode and reason about obligations of autonomous systems. We use DAU to analyze Intel's Responsibility-Sensitive Safety (RSS) proposal as a real-world case study. We demonstrate that DAU can express well-posed RSS rules and formally derive undesirable consequences of these rules, and we illustrate how DAU could help design systems with specific obligations and how to model-check DAU obligations.
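As a hedged illustration only (the paper's exact DAU formalization is not reproduced here), RSS's published minimum safe longitudinal gap and a stit-style deontic obligation to maintain it could be written as follows, where v_r and v_f are the rear and front vehicles' speeds, rho is the response time, and the obligation reads "the rear agent r ought to see to it that the gap d stays at least d_min":

```latex
% RSS minimum safe longitudinal gap (Shalev-Shwartz et al.); the deontic
% encoding below it is an illustrative assumption, not the paper's exact
% DAU formalization.
d_{\min} = v_r\,\rho + \tfrac{1}{2}\,a_{\mathrm{accel}}^{\max}\,\rho^{2}
         + \frac{\bigl(v_r + \rho\,a_{\mathrm{accel}}^{\max}\bigr)^{2}}{2\,a_{\mathrm{brake}}^{\min}}
         - \frac{v_f^{2}}{2\,a_{\mathrm{brake}}^{\max}}
\qquad
\odot\bigl[\, r \ \mathsf{cstit}\!: \ d \geq d_{\min} \,\bigr]
```

Encoding the rule as an obligation on an agent's choices, rather than as a plain temporal invariant, is what lets a deontic logic such as DAU distinguish "the gap happens to stay safe" from "the rear vehicle is obligated to keep it safe."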