Ventricular Fibrillation is a disorganized electrical excitation of the heart that results in inadequate blood flow to the body. It usually ends in death within seconds. The most common way to treat the symptoms of fibrillation is to implant a medical device, known as an Implantable Cardioverter Defibrillator (ICD), in the patient's body. Model-based verification can supply rigorous proofs of safety and efficacy. In this paper, we build a hybrid system model of the human heart+ICD closed loop and show it to be a STORMED system, a class of o-minimal hybrid systems that admit finite bisimulations. In general, it may not be possible to compute the bisimulation. We show that approximate reachability can yield a finite simulation for STORMED systems, which improves on the existing verification procedure. In the process, we show that certain compositions respect the STORMED property. Thus it is possible to model check important formal properties of ICDs in a closed loop with the heart, such as delayed therapy, missed therapy, or inappropriately administered therapy. The results of this paper are theoretical and motivate the creation of concrete model checking procedures for STORMED systems.
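As background for the abstract above, and not a construction taken from the paper itself, the following is the standard definition of a bisimulation on a transition system (S, →): an equivalence relation whose related states can match each other's transitions step for step. The STORMED result guarantees such a relation with finitely many equivalence classes, which is what makes model checking feasible.

```latex
% Standard definition, stated abstractly (not the paper's construction):
% an equivalence relation ~ on S is a bisimulation if
s \sim t \;\Longrightarrow\;
\begin{cases}
\forall s'.\; s \to s' \implies \exists t'.\; t \to t' \wedge s' \sim t',\\[2pt]
\forall t'.\; t \to t' \implies \exists s'.\; s \to s' \wedge s' \sim t'.
\end{cases}
```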
Background: One of the common causes of sudden cardiac death (SCD) in young people is hypertrophic cardiomyopathy (HCM), and the primary prevention of SCD is an implantable cardioverter defibrillator (ICD). Because of the incidence of appropriate ICD therapy and the complications associated with ICD implantation and discharge, patients with implanted ICDs are closely monitored, and interrogation reports are generated from clinical consultations. Methods: In this study, we compared the performance of structured device data and unstructured interrogation reports for extracting information on ICD therapy and heart rhythm. We sampled 687 reports with a gold standard generated through manual chart review. A rule-based natural language processing (NLP) system was developed using 480 reports, and the information in the corresponding device data was aggregated for the task. We compared the performance of the NLP system with information aggregated from structured device data using the remaining 207 reports. Results: The rule-based NLP system achieved F-measures of 0.92 and 0.98 for ICD therapy and heart rhythm, respectively, while the performance of aggregating device data was significantly lower, with F-measures of 0.78 and 0.45. Limitations of using only structured device data include the inability to differentiate real events from management events, data availability, and vendor-specific differences in perspective and data granularity, while using interrogation reports requires overcoming non-representative keywords/patterns and contextual errors. Conclusions: Extracting phenotyping information from data generated in the real world requires the incorporation of medical knowledge. It is essential to analyze, compare, and harmonize multiple data sources for real-world evidence generation.
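To make the Methods concrete, here is a minimal sketch of what a rule-based extractor for interrogation reports can look like. The patterns, labels, and example report text are invented for illustration and are not the study's actual rule set.

```python
import re

# Hypothetical rules: each maps a keyword/pattern to a therapy or rhythm label.
THERAPY_RULES = [
    (re.compile(r"\b(ATP|anti-?tachycardia pacing)\b", re.I), "ATP"),
    (re.compile(r"\bshock(s)?\s+(delivered|aborted)\b", re.I), "shock"),
]
RHYTHM_RULES = [
    (re.compile(r"\b(ventricular fibrillation|VF)\b", re.I), "VF"),
    (re.compile(r"\b(ventricular tachycardia|VT)\b", re.I), "VT"),
    (re.compile(r"\b(atrial fibrillation|AF)\b", re.I), "AF"),
]

def extract(report: str) -> dict:
    """Return the therapy and rhythm labels whose patterns match the report."""
    return {
        "therapy": sorted({lab for rx, lab in THERAPY_RULES if rx.search(report)}),
        "rhythm": sorted({lab for rx, lab in RHYTHM_RULES if rx.search(report)}),
    }

print(extract("Episode 3: VT detected at 188 bpm; ATP delivered, then one shock delivered."))
# -> {'therapy': ['ATP', 'shock'], 'rhythm': ['VT']}
```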
Digital controllers have several advantages with respect to flexibility and design simplicity. However, they are subject to problems that analog controllers do not face. In particular, these problems are related to finite word-length implementations, which can lead to overflows, limit cycles, and time constraints in fixed-point processors. This paper proposes a new method to detect design errors in digital controllers using a state-of-the-art bounded model checker based on satisfiability modulo theories. Experiments with digital controllers for a ball and beam plant demonstrate that the proposed method is more effective at finding errors in digital controllers than other existing approaches based on traditional simulation tools.
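As an illustration of the underlying idea, and not the paper's actual tool, the following sketch uses the z3 SMT solver to check, step by step up to a small bound, whether any input sample can overflow a signed fixed-point accumulator. The word length, controller update, and bound are invented for the example (requires the z3-solver package).

```python
from z3 import BitVec, BitVecVal, Solver, Not, BVAddNoOverflow, sat

WORD = 16                                  # hypothetical fixed-point word length
s = Solver()
x = BitVec("x", WORD)                      # unknown controller input sample
acc = BitVecVal(0, WORD)                   # accumulator starts at 0

for k in range(4):                         # unroll 4 controller steps (the "bound")
    s.push()
    # Ask the solver for an input that violates no-overflow of acc + x (signed).
    s.add(Not(BVAddNoOverflow(acc, x, True)))
    if s.check() == sat:
        print(f"step {k}: input {s.model()[x]} overflows the {WORD}-bit accumulator")
        s.pop()
        break
    s.pop()
    acc = acc + x                          # symbolic controller update acc := acc + x
else:
    print("no overflow reachable within the bound")
```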
Parametric model checking (PMC) computes algebraic formulae that express key non-functional properties of a system (reliability, performance, etc.) as rational functions of the system and environment parameters. In software engineering, PMC formulae can be used during design, e.g., to analyse the sensitivity of different system architectures to parametric variability, or to find optimal system configurations. They can also be used at runtime, e.g., to check if non-functional requirements are still satisfied after environmental changes, or to select new configurations after such changes. However, current PMC techniques do not scale well to systems with complex behaviour and more than a few parameters. Our paper introduces a fast PMC (fPMC) approach that overcomes this limitation, extending the applicability of PMC to a broader class of systems than previously possible. To this end, fPMC partitions the Markov models that PMC operates with into fragments whose reachability properties are analysed independently, and obtains PMC reachability formulae by combining the results of these fragment analyses. To demonstrate the effectiveness of fPMC, we show how our fPMC tool can analyse three systems (taken from the research literature, and belonging to different application domains) with which current PMC techniques and tools struggle.
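For intuition about what a PMC formula looks like, the following sketch derives the closed-form reachability probability of a tiny two-parameter Markov chain with sympy. The chain is invented for illustration; fPMC's contribution is making this kind of derivation scale by fragmenting much larger models.

```python
import sympy as sp

p, q = sp.symbols("p q", positive=True)
x0, x1 = sp.symbols("x0 x1")   # Pr(reach target) from states s0 and s1

# Toy chain: s0 -> target with prob p, else to s1; s1 -> back to s0 with prob q.
eqs = [sp.Eq(x0, p + (1 - p) * x1),
       sp.Eq(x1, q * x0)]
sol = sp.solve(eqs, [x0, x1])
print(sp.simplify(sol[x0]))    # -> p/(p*q - q + 1), a rational function of p, q
```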
Statistical Model Checking (SMC) is a trade-off between testing and formal verification. The core idea of the approach is to conduct simulations of the system and check whether they satisfy a given property. In this paper we show that SMC is easily parallelizable on a master/slaves architecture by introducing a series of algorithms that scale almost linearly with respect to the number of slave computers. Our approach has been implemented in the UPPAAL SMC toolset and applied to non-trivial case studies.
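The master/slaves idea is easy to sketch: each worker runs an independent batch of simulations and reports how many satisfied the property, and the master aggregates the counts into one estimate. The model (a biased random walk) and batch sizes below are invented for illustration; UPPAAL SMC's actual algorithms additionally handle sequential stopping rules.

```python
import random
from multiprocessing import Pool

def satisfies(seed: int) -> bool:
    """One simulation: does a biased random walk reach +5 within 100 steps?"""
    rng = random.Random(seed)
    pos = 0
    for _ in range(100):
        pos += 1 if rng.random() < 0.55 else -1
        if pos >= 5:
            return True
    return False

def worker(args):
    """Slave: run a batch of simulations, return the number of successes."""
    start, batch = args
    return sum(satisfies(start + i) for i in range(batch))

if __name__ == "__main__":
    batches = [(w * 10_000, 10_000) for w in range(4)]   # 4 "slave" workers
    with Pool(4) as pool:                                # master aggregates
        hits = sum(pool.map(worker, batches))
    print(f"estimated Pr(property) ~ {hits / 40_000:.3f}")
```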
In this paper we investigate the applicability of standard model checking approaches to verifying properties of probabilistic programs. As the operational model of a standard probabilistic program is a potentially infinite parametric Markov decision process, no direct adaptation of existing techniques is possible. Therefore, we propose an on-the-fly approach where the operational model is successively created and verified via a step-wise execution of the program. This approach makes it possible to take key features of many probabilistic programs into account: nondeterminism and conditioning. We discuss the restrictions and demonstrate the scalability on several benchmarks.
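A minimal sketch of the on-the-fly idea, assuming a toy probabilistic program (a geometric loop, with nondeterminism and conditioning omitted for brevity): program states are expanded into the operational model only as the exploration reaches them, so the potentially infinite model is never built in full.

```python
from collections import deque

def step(state):
    """Successor distribution of a program state (pc, counter)."""
    pc, n = state
    if pc == "loop":                       # flip a fair coin: stop or increment
        return [(0.5, ("done", n)), (0.5, ("loop", n + 1))]
    return []                              # terminal state

def expand(init, max_states=20):
    """Breadth-first, on-the-fly expansion of the operational Markov chain."""
    seen, frontier, edges = {init}, deque([init]), []
    while frontier and len(seen) < max_states:
        s = frontier.popleft()
        for prob, t in step(s):
            edges.append((s, prob, t))
            if t not in seen:              # create each state only when reached
                seen.add(t)
                frontier.append(t)
    return edges

for s, prob, t in expand(("loop", 0))[:6]:
    print(f"{s} --{prob}--> {t}")
```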