
Distributed Parametric and Statistical Model Checking

Published by EPTCS
Publication date: 2011
Research field: Informatics Engineering
Paper language: English
Author: Peter Bulychev





Statistical Model Checking (SMC) is a trade-off between testing and formal verification. The core idea of the approach is to generate simulation runs of the system and check whether they satisfy a given property. In this paper we show that SMC is easily parallelizable on a master/slaves architecture by introducing a series of algorithms that scale almost linearly with respect to the number of slave computers. Our approach has been implemented in the UPPAAL SMC toolset and applied to non-trivial case studies.
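
To make the master/slave scheme concrete, here is a minimal Python sketch of the idea (a toy stand-in, not the UPPAAL SMC implementation; the random-walk model, property, and batch sizes are assumptions): each slave runs a batch of independent simulations and reports how many satisfied the property, and the master aggregates these Bernoulli outcomes into a probability estimate. Because the runs are independent, adding slaves increases the number of samples per unit time almost linearly.

```python
# A minimal sketch of master/slave statistical model checking (hypothetical
# toy model and property, not the UPPAAL SMC code). Slaves run independent
# simulation batches; the master aggregates the Bernoulli outcomes.
import random
from multiprocessing import Pool


def simulate_once(seed):
    """One stochastic simulation checked against a toy property:
    a biased random walk reaches +3 within 20 steps."""
    rng = random.Random(seed)
    position = 0
    for _ in range(20):
        position += 1 if rng.random() < 0.55 else -1
        if position >= 3:
            return 1
    return 0


def slave(args):
    """Run a batch of simulations and return how many satisfied the property."""
    seed_base, batch_size = args
    return sum(simulate_once(seed_base + i) for i in range(batch_size))


if __name__ == "__main__":
    n_slaves, batch = 4, 5000
    with Pool(n_slaves) as pool:  # the master farms batches out to the slaves
        successes = sum(pool.map(slave, [(k * batch, batch) for k in range(n_slaves)]))
    total = n_slaves * batch
    print(f"estimated P(property) ~ {successes / total:.4f} from {total} runs")
```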




Read also

Parametric model checking (PMC) computes algebraic formulae that express key non-functional properties of a system (reliability, performance, etc.) as rational functions of the system and environment parameters. In software engineering, PMC formulae can be used during design, e.g., to analyse the sensitivity of different system architectures to parametric variability, or to find optimal system configurations. They can also be used at runtime, e.g., to check if non-functional requirements are still satisfied after environmental changes, or to select new configurations after such changes. However, current PMC techniques do not scale well to systems with complex behaviour and more than a few parameters. Our paper introduces a fast PMC (fPMC) approach that overcomes this limitation, extending the applicability of PMC to a broader class of systems than previously possible. To this end, fPMC partitions the Markov models that PMC operates with into fragments whose reachability properties are analysed independently, and obtains PMC reachability formulae by combining the results of these fragment analyses. To demonstrate the effectiveness of fPMC, we show how our fPMC tool can analyse three systems (taken from the research literature, and belonging to different application domains) with which current PMC techniques and tools struggle.
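
The fragment idea can be illustrated with a small sketch (a hypothetical toy parametric Markov chain analysed with sympy, not the fPMC tool): each fragment's reachability probability is derived independently as a rational function of the parameters, and the per-fragment formulae are then combined into an end-to-end reachability formula.

```python
# A toy illustration of the fragment idea (hypothetical parametric Markov
# chain, analysed with sympy; not the fPMC tool itself).
import sympy as sp

p1, q1, p2, q2 = sp.symbols("p1 q1 p2 q2", positive=True)


def fragment_reachability(succ, retry):
    """Reachability formula for a 'retry' fragment: from its entry state,
    move to the exit with probability succ, otherwise return to the entry
    with probability retry (and fail with probability 1 - retry)."""
    x = sp.Symbol("x")
    sol = sp.solve(sp.Eq(x, succ + (1 - succ) * retry * x), x)
    return sp.simplify(sol[0])  # == succ / (1 - retry*(1 - succ))


# Analyse each fragment independently ...
f1 = fragment_reachability(p1, q1)
f2 = fragment_reachability(p2, q2)

# ... and combine the per-fragment results: the two fragments are arranged
# in series here, so the end-to-end reachability formula is their product.
end_to_end = sp.simplify(f1 * f2)
print(end_to_end)
```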
Paolo Ballarini, 2009
We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic terms. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand which rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions from the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the methodology we propose uses a stochastic simulation algorithm for generating execution samples; however, there are three key aspects that improve its efficiency. First, the sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability of P holding is based on an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel on an HPC architecture.
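
For the second point, a generic Wilson score interval can be sketched as follows (standard textbook formula; the paper's specific variant is not reproduced here): given n sampled runs of which k satisfy the property, the interval bounds the probability that P holds.

```python
# A generic Wilson score interval for an estimated satisfaction probability
# (standard formula; the paper's specific variant is not reproduced here).
from math import sqrt


def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a Bernoulli proportion (z = 1.96 ~ 95%)."""
    phat = successes / n
    denom = 1 + z * z / n
    centre = (phat + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
    return centre - half, centre + half


# e.g. 870 satisfying runs out of 1000 sampled executions
print(wilson_interval(870, 1000))  # roughly (0.848, 0.889)
```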
Parametric Markov chains occur quite naturally in various applications: they can be used for a conservative analysis of probabilistic systems (no matter how the parameter is chosen, the system works to specification); they can be used to find optimal settings for a parameter; they can be used to visualise the influence of system parameters; and they can be used to make it easy to adjust the analysis when parameters change. Unfortunately, these advantages come at a cost: parametric model checking is (or rather was) often slow. To make the analysis of parametric Markov models scale, we need three ingredients: clever algorithms, the right data structure, and good engineering. Clever algorithms are often the main (or sole) selling point, so this paper focuses on the latter two ingredients of efficient model checking. Consequently, our easiest claim to fame is the speed-up we have often realised when comparing to the state of the art.
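
As a small illustration of what a parametric model checker computes (a hypothetical two-state example, not the tool discussed in the paper), the reachability probabilities of a parametric Markov chain satisfy a linear equation system over the parameters, which can be solved symbolically into a rational function and then evaluated cheaply for any concrete parameter values.

```python
# A hypothetical two-state parametric Markov chain (not the tool discussed
# in the paper): reachability probabilities satisfy a linear equation system
# over the parameters and can be solved into a rational function.
import sympy as sp

p, q = sp.symbols("p q", positive=True)
x0, x1 = sp.symbols("x0 x1")  # Pr(reach target) from states s0 and s1

# s0 -> s1 with probability p (otherwise fail);
# s1 -> target with probability q (otherwise back to s0)
equations = [sp.Eq(x0, p * x1), sp.Eq(x1, q + (1 - q) * x0)]
solution = sp.solve(equations, [x0, x1], dict=True)[0]
reach = sp.simplify(solution[x0])
print(reach)  # p*q / (1 - p*(1 - q)), a rational function of the parameters

# The closed-form result can then be re-evaluated cheaply for any parameter
# values, e.g. to check a requirement such as Pr >= 0.9 across configurations.
print(reach.subs({p: 0.95, q: 0.9}))
```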
Nilay Oza, 2013
Cloud-based infrastructure has been increasingly adopted by the industry in distributed software development (DSD) environments. Its proponents claim that its benefits include reduced cost, increased speed, and greater productivity in software development. Empirical evaluations, however, are at a nascent stage of examining both the benefits and the risks of cloud-based infrastructure. The objective of this paper is to identify potential benefits and risks of using the cloud in a DSD project conducted by teams based in Helsinki and Madrid. A cross-case qualitative analysis is performed based on focus groups conducted at the Helsinki and Madrid sites. Participants' observations are used to supplement the analysis. The results of the analysis indicate that the main benefits of using the cloud are rapid development, continuous integration, cost savings, code sharing, and faster ramp-up. The key risks identified by the project are dependencies, unavailability of access to the cloud, code commitment and integration, technical debt, and additional support costs. The results reveal that if such environments are not planned and set up carefully, the benefits of using the cloud in DSD projects might be overshadowed by the associated risks.
Runtime verification is a computing analysis paradigm based on observing a system at runtime (to check its expected behaviour) by means of monitors generated from formal specifications. Distributed runtime verification is runtime verification in connection with distributed systems: it comprises both monitoring of distributed systems and using distributed systems for monitoring. Aggregate computing is a programming paradigm based on a reference computing machine that is the aggregate collection of devices that cooperatively carry out a computational process: the details of behaviour, position and number of devices are largely abstracted away, to be replaced with a space-filling computational environment. In this position paper we argue, by means of simple examples, that aggregate computing is particularly well suited for implementing distributed monitors. Our aim is to foster further research on how to generate aggregate computing monitors from suitable formal specifications.
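
As a toy illustration of runtime monitoring in general (plain Python, unrelated to any specific aggregate-computing framework), the sketch below checks a simple bounded-response specification over a stream of observed events: every request must be followed by a response within a fixed number of subsequent events.

```python
# A toy runtime monitor in plain Python (unrelated to any specific
# aggregate-computing framework): every "request" must be followed by a
# "response" within `bound` subsequent events.
def make_monitor(bound):
    pending = []  # ages (in events) of outstanding requests

    def observe(event):
        nonlocal pending
        pending = [age + 1 for age in pending]
        if event == "request":
            pending.append(0)
        elif event == "response" and pending:
            pending.pop(0)
        return all(age <= bound for age in pending)  # False signals a violation

    return observe


monitor = make_monitor(bound=3)
trace = ["request", "tick", "response", "request", "tick", "tick", "tick", "tick"]
for event in trace:
    print(event, monitor(event))  # the final event reports a violation
```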