Context: Demonstrating high reliability and safety for safety-critical systems (SCSs) remains a hard problem. Diverse evidence needs to be combined in a rigorous way: in particular, results of operational testing with other evidence from design and verification. The growing use of machine learning in SCSs, by precluding most established methods for gaining assurance, makes operational testing even more important for supporting safety and reliability claims. Objective: We use Autonomous Vehicles (AVs) as a current example to revisit the problem of demonstrating high reliability. AVs are making their debut on public roads: methods for assessing whether an AV is safe enough are urgently needed. We demonstrate how to answer five questions that would arise in assessing an AV type, starting with those proposed by a highly cited study. Method: We apply new theorems extending Conservative Bayesian Inference (CBI), which exploit the rigour of Bayesian methods while reducing the risk of involuntary misuse associated with now-common applications of Bayesian inference; we define additional conditions needed for applying these methods to AVs. Results: Prior knowledge can bring substantial advantages if the AV design allows strong expectations of safety before road testing. We also show how naive attempts at conservative assessment may lead to over-optimism instead; why extrapolating the trend of disengagements is not suitable for safety claims; and how to use knowledge that an AV has moved to a less stressful environment. Conclusion: While some reliability targets will remain too high to be practically verifiable, CBI removes a major source of doubt: it allows the use of prior knowledge without inducing dangerously optimistic biases. For certain ranges of required reliability and prior beliefs, CBI thus supports feasible, sound arguments; useful conservative claims can be derived from limited prior knowledge.
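To make the flavour of CBI concrete, here is a minimal numerical sketch, assuming a single prior constraint P(pfd ≤ ε) ≥ θ on a per-demand probability of failure and independent failure-free demands. The two-point worst-case prior and all numbers below are illustrative assumptions, not the paper's actual theorems.

```python
# Illustrative sketch of Conservative Bayesian Inference (CBI), not the
# paper's exact theorems. Given only the prior constraint
#   P(pfd <= eps) >= theta,
# the prior that minimizes posterior confidence in the claim "pfd <= p"
# (for p > eps) after n failure-free demands places mass theta at eps
# and mass (1 - theta) just above p.

def conservative_confidence(n, eps, theta, p):
    """Lower bound on P(pfd <= p | n failure-free demands)."""
    assert p > eps, "the claimed bound must be weaker than the prior constraint"
    good = theta * (1.0 - eps) ** n        # likelihood weight of the 'good' mass
    bad = (1.0 - theta) * (1.0 - p) ** n   # likelihood weight of the worst-case mass
    return good / (good + bad)

# Hypothetical numbers: prior confidence theta = 0.9 that the per-mile
# failure probability is below 1e-9; claim p = 1e-7 after 10 million
# failure-free miles (treating each mile as one demand).
print(conservative_confidence(n=10_000_000, eps=1e-9, theta=0.9, p=1e-7))
```

With these illustrative numbers the bound rises from the prior 0.9 to roughly 0.96, showing how limited prior knowledge combined with failure-free mileage can support a conservative claim without choosing a full prior distribution.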
In this paper, we present ViSTA, a framework for Virtual Scenario-based Testing of Autonomous Vehicles (AVs), developed as part of the 2021 IEEE Autonomous Driving AI Test Challenge. Scenario-based virtual testing aims to construct specific challenges for the AV to overcome, albeit in virtual test environments that may not necessarily resemble the real world. This approach aims to identify issues that raise safety concerns before the AV is actually deployed on the road. In this paper, we describe a comprehensive test case generation approach that facilitates the design of special-purpose scenarios with meaningful parameters to form test cases, both automatically and manually, leveraging the strengths and weaknesses of each. Furthermore, we describe how to automate the execution of test cases and how to analyze the performance of the AV under these test cases.
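As a sketch of what combining automated and manual parameterized scenario generation can look like (the scenario fields, parameter ranges, and sampling strategy below are hypothetical illustrations, not ViSTA's actual schema):

```python
# Hypothetical sketch of parameterized scenario-based test-case generation.
# The cut-in scenario, its fields, and the ranges are illustrative only.
import itertools
import random
from dataclasses import dataclass

@dataclass
class CutInScenario:
    ego_speed_mps: float      # ego vehicle target speed
    cut_in_gap_m: float       # gap at which the other vehicle cuts in
    cut_in_speed_mps: float   # speed of the cutting-in vehicle

def automated_grid(gap_step=3):
    """Automated generation: sweep a coarse grid of parameter values."""
    for v, gap, cv in itertools.product(
            range(10, 31, 10),       # ego speed 10..30 m/s
            range(5, 26, gap_step),  # cut-in gap 5..25 m
            range(5, 26, 10)):       # cut-in speed 5..25 m/s
        yield CutInScenario(float(v), float(gap), float(cv))

def manual_edge_cases():
    """Manually designed cases targeting suspected weaknesses."""
    yield CutInScenario(ego_speed_mps=30.0, cut_in_gap_m=5.0, cut_in_speed_mps=8.0)

test_cases = list(automated_grid()) + list(manual_edge_cases())
random.shuffle(test_cases)
print(f"{len(test_cases)} test cases generated")
```

The split mirrors the trade-off the abstract alludes to: automated sweeps give breadth of coverage over the parameter space, while manually designed cases encode expert knowledge of likely failure modes.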
This volume contains the proceedings of the First International Workshop on Formal Techniques for Safety-Critical Systems (FTSCS 2012), held in Kyoto on November 12, 2012, as a satellite event of the ICFEM conference. The aim of this workshop is to bring together researchers and engineers interested in the application of (semi-)formal methods to improve the quality of safety-critical computer systems. FTSCS is particularly interested in industrial applications of formal methods. Topics include:
- the use of formal methods for safety-critical and QoS-critical systems, including avionics, automotive, and medical systems;
- methods, techniques, and tools to support automated analysis, certification, debugging, etc.;
- analysis methods that address the limitations of formal methods in industry;
- formal analysis support for modeling languages used in industry, such as AADL, Ptolemy, SysML, SCADE, Modelica, etc.; and
- code generation from validated models.
The workshop received 25 submissions; 21 of these were regular papers and 4 were tool/work-in-progress/position papers. Each submission was reviewed by three referees; based on the reviews and extensive discussions, the program committee selected nine regular papers, which are included in this volume. Our program also included an invited talk by Ralf Huuck.
PerceptIn develops and commercializes autonomous vehicles for micromobility around the globe. This paper provides a holistic summary of PerceptIn's development and operating experiences. It presents the business story behind our product and describes the development of the computing system for our vehicles. We explain the design decisions made for the computing system and show the advantage of offloading localization workloads onto an FPGA platform.
Autonomous vehicles bring the promise of enhancing the consumer experience in terms of comfort and convenience and, in particular, of improving safety. Safety functions in autonomous vehicles, such as Automatic Emergency Braking and Lane Centering Assist, rely on computation, information sharing, and the timely actuation of the safety functions. One opportunity to achieve robust autonomous vehicle safety is to enhance the robustness of in-vehicle networking architectures with built-in resiliency mechanisms. Software Defined Networking (SDN) is an advanced networking paradigm that allows fine-grained manipulation of routing tables and routing engines and the implementation of complex features such as failover, a mechanism that protects in-vehicle networks from failure by having a standby link automatically take over once the main link fails. In this paper, we leverage SDN network programmability features to enable resiliency in the autonomous vehicle realm. We demonstrate that a Software Defined In-Vehicle Network (SDIVN) does not add overhead compared to Legacy In-Vehicle Networks (LIVNs) under non-failure conditions, and we highlight its superiority in the case of a link failure and its timely delivery of messages. We verify the proposed architecture's benefits using a simulation environment that we have developed, and we validate our design choices through testing and simulations.
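To illustrate the failover idea in isolation, here is a minimal sketch of the kind of logic an SDN controller could implement for an in-vehicle network; the Link class, heartbeat timeout, and flow-table update below are hypothetical placeholders, not the paper's SDIVN implementation.

```python
# Hypothetical sketch of SDN-style link failover for an in-vehicle network.
import time
from dataclasses import dataclass, field

HEARTBEAT_TIMEOUT_S = 0.010  # declare a link dead after 10 ms of silence (assumed value)

@dataclass
class Link:
    name: str
    last_heartbeat: float = field(default_factory=time.monotonic)

    def alive(self) -> bool:
        return time.monotonic() - self.last_heartbeat < HEARTBEAT_TIMEOUT_S

class FailoverController:
    """Routes safety-critical flows over the primary link; on heartbeat
    loss, rewrites the (abstracted) flow table to use the standby link."""

    def __init__(self, primary: Link, standby: Link):
        self.primary, self.standby = primary, standby
        self.active = primary

    def poll(self) -> None:
        if self.active is self.primary and not self.primary.alive():
            self.active = self.standby           # failover to standby
            self.install_flow(self.standby)
        elif self.active is self.standby and self.primary.alive():
            self.active = self.primary           # primary recovered: fail back
            self.install_flow(self.primary)

    def install_flow(self, link: Link) -> None:
        # Stand-in for a controller-to-switch flow-table update.
        print(f"flow table now forwards safety traffic via {link.name}")

controller = FailoverController(Link("primary"), Link("standby"))
controller.poll()  # would be called periodically by the controller's event loop
```

The key property such a design targets is bounded failover latency: because the controller programs both paths in advance, switching flows to the standby link costs one flow-table update rather than a full route reconvergence.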
In recent years, many deep learning models have been adopted in autonomous driving. At the same time, these models introduce new vulnerabilities that may compromise the safety of autonomous vehicles. Specifically, recent studies have demonstrated that adversarial attacks can cause a significant decline in the detection precision of deep learning-based 3D object detection models. Although driving safety is the ultimate concern for autonomous driving, there is no comprehensive study on the linkage between the performance of deep learning models and the driving safety of autonomous vehicles under adversarial attacks. In this paper, we investigate the impact of two primary types of adversarial attacks, perturbation attacks and patch attacks, on the driving safety of vision-based autonomous vehicles, rather than on the detection precision of deep learning models. In particular, we consider two state-of-the-art models in vision-based 3D object detection, Stereo R-CNN and DSGN. To evaluate driving safety, we propose an end-to-end evaluation framework with a set of driving safety performance metrics. By analyzing the results of our extensive evaluation experiments, we find that (1) the attacks' impact on the driving safety of autonomous vehicles is decoupled from their impact on the precision of 3D object detectors, and (2) the DSGN model is more robust to adversarial attacks than the Stereo R-CNN model. In addition, we further investigate the causes behind these two findings with an ablation study. The findings of this paper provide a new perspective on evaluating adversarial attacks and guide the selection of deep learning models in autonomous driving.
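As a sketch of what an L∞-bounded perturbation attack looks like, the snippet below implements a generic FGSM-style attack on a differentiable model; the model, loss function, and ε are hypothetical placeholders, and the paper's stereo 3D detectors (Stereo R-CNN, DSGN) use more involved losses than this sketch.

```python
# Minimal FGSM-style L-infinity perturbation attack on a generic
# differentiable model; illustrative only, not the paper's exact setup.
import torch

def fgsm_perturb(model, loss_fn, images, targets, epsilon=0.01):
    """Return adversarially perturbed images within an L-inf ball of radius epsilon."""
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(model(images), targets)
    loss.backward()
    # Step in the direction that increases the detector's loss.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep a valid pixel range
```

An end-to-end evaluation in the paper's spirit would then feed such perturbed frames through the full detection, planning, and control pipeline and score driving-safety metrics (e.g., collision outcomes) rather than detection precision alone, which is exactly the decoupling the first finding highlights.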