Process models constitute crucial artifacts in modern information systems; hence, their proper comprehension is of utmost importance for the effective use of such systems. Generally, process models are considered from two different perspectives: that of process modelers and that of readers. The two perspectives share similarities but also differ in how process models are comprehended (e.g., due to diverse experiences in working with process models). The literature has proposed many rules and guidelines to ensure a proper comprehension of process models from both perspectives. As a novel contribution in this context, this paper introduces the Process Model Comprehension Framework (PMCF) as a first step towards measuring and quantifying the perspectives of process modelers and readers, as well as the interaction between the two, with respect to the comprehension of process models. To this end, the PMCF describes an Evaluation Theory Tree based on Communication Theory and the Conceptual Modeling Quality Framework, and considers a total of 96 quality metrics to quantify process model comprehension. Furthermore, the PMCF was evaluated in a survey with 131 participants and has been implemented and applied successfully in a practical case study with 33 participants. In conclusion, the PMCF allows for the identification of pitfalls and provides information on how to assist both process modelers and readers in order to foster and enable a proper comprehension of process models.
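The abstract does not spell out how the 96 quality metrics combine into per-perspective scores, so the following is only a hypothetical sketch of such an aggregation: the metric names, weights, and values are all invented for illustration and do not reproduce the PMCF's actual scoring rules.

```python
# Hypothetical sketch: rolling per-perspective quality metrics up into a
# single comprehension score. Metric names, weights, and values are invented;
# the PMCF's 96 metrics and their real scoring rules are not reproduced here.

# Normalized metric values in [0, 1] for each perspective (invented numbers).
SCORES = {
    "modeler": {"syntactic_quality": 0.9, "semantic_quality": 0.7,
                "pragmatic_quality": 0.6},
    "reader":  {"perceived_ease": 0.8, "answer_accuracy": 0.65,
                "answer_speed": 0.75},
}
WEIGHTS = {"syntactic_quality": 0.3, "semantic_quality": 0.4,
           "pragmatic_quality": 0.3, "perceived_ease": 0.2,
           "answer_accuracy": 0.5, "answer_speed": 0.3}

def comprehension_score(metrics):
    """Weighted mean of normalized metric values for one perspective."""
    total_w = sum(WEIGHTS[m] for m in metrics)
    return sum(WEIGHTS[m] * v for m, v in metrics.items()) / total_w

for perspective, metrics in SCORES.items():
    print(f"{perspective}: {comprehension_score(metrics):.2f}")
```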
Many methods have been proposed to estimate how much effort is required to build and maintain software. Much of that research assumes a ``classic'' waterfall-based approach rather than contemporary projects (where the development process may be more iterative than linear in nature). Also, much of that work tries to recommend a single method -- an approach that makes the dubious assumption that one method can handle the diversity of software project data. To address these drawbacks, we apply a configuration technique called ``ROME'' (Rapid Optimizing Methods for Estimation), which uses sequential model-based optimization (SMO) to find which combination of effort estimation techniques works best for a particular data set. We test this method using data from 1161 classic waterfall projects and 120 contemporary projects (from GitHub). In terms of magnitude of relative error and standardized accuracy, we find that ROME achieves better performance than existing state-of-the-art methods for both classic and contemporary problems. In addition, we conclude that we should not recommend a single method for estimation. Rather, it is better to search through a wide range of different methods to find what works best for local data. To the best of our knowledge, this is the largest effort estimation experiment yet attempted and the only one to test its methods on both classic and contemporary projects.
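As a rough illustration of the search idea behind ROME (not its actual implementation), the sketch below runs a toy sequential model-based optimization loop over a hypothetical configuration space for analogy-based estimation. The project data, the configuration space, and the nearest-neighbor surrogate are all invented for this example.

```python
# Sketch of a sequential model-based optimization (SMO) loop over effort
# estimation configurations, in the spirit of the ROME idea described above.
# Everything here (the config space, the toy data, the surrogate) is a
# simplified stand-in, not the actual ROME implementation.
import random

# Toy project data: (feature vector, actual effort). Purely illustrative.
PROJECTS = [([10, 2], 120.0), ([20, 3], 250.0), ([5, 1], 60.0),
            ([15, 4], 210.0), ([8, 2], 100.0), ([25, 5], 330.0)]

def knn_estimate(train, x, k):
    """Analogy-based estimation: mean effort of the k nearest projects."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(train, key=lambda p: dist(p[0], x))[:k]
    return sum(e for _, e in nearest) / len(nearest)

def mmre(config):
    """Leave-one-out mean magnitude of relative error for one configuration."""
    errs = []
    for i, (x, actual) in enumerate(PROJECTS):
        train = PROJECTS[:i] + PROJECTS[i + 1:]
        pred = knn_estimate(train, x, config["k"])
        errs.append(abs(pred - actual) / actual)
    return sum(errs) / len(errs)

def smo(n_init=3, n_iter=10, pool=30):
    space = lambda: {"k": random.randint(1, 5)}
    evaluated = [(c, mmre(c)) for c in (space() for _ in range(n_init))]
    for _ in range(n_iter):
        # Surrogate: predict a candidate's error from its nearest
        # already-evaluated config, then evaluate only the most promising one.
        candidates = [space() for _ in range(pool)]
        def predicted(c):
            return min(evaluated, key=lambda e: abs(e[0]["k"] - c["k"]))[1]
        best_cand = min(candidates, key=predicted)
        evaluated.append((best_cand, mmre(best_cand)))
    return min(evaluated, key=lambda e: e[1])

if __name__ == "__main__":
    config, err = smo()
    print(f"best config: {config}, MMRE: {err:.3f}")
```

The point of the surrogate step is that cheap predictions filter a large candidate pool so that the expensive evaluation (here, leave-one-out MMRE) runs only on the most promising configuration per iteration.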
In this work, we outline a cross-domain assurance process for safety-relevant software in embedded systems. This process is intended to be applicable across different application domains and in conjunction with any development methodology. With this approach, we aim to reduce the growing effort for safety assessment in embedded systems by reusing safety analysis techniques and tools for product development across domains.
Context: Software testing plays an essential role in product quality improvement. For this reason, several software testing models have been developed to support organizations. However, the adoption of testing process models inside organizations is still sporadic, and more evidence about reported experiences is needed. Aim: Our goal is to identify results gathered from the application of software testing models in organizational contexts. We focus on characteristics such as the context of use, the practices applied in different testing process phases, and the reported benefits and drawbacks. Method: We performed a Systematic Literature Review (SLR) focused on studies about the application of software testing processes, complemented by results from previous reviews. Results: From 35 primary studies and survey-based articles, we collected 17 testing models. Although most of the existing models are described as applicable to general contexts, the evidence obtained from the studies shows that some models are not suitable for all enterprise sizes or are inadequate for specific domains. Conclusion: The SLR evidence can serve to compare different software testing models for applicability inside organizations. Both the benefits and the drawbacks reported in the surveyed cases provide a better view of the strengths and weaknesses of each model.
Testing processes and workflows in information and Internet of Things systems is a major part of the typical software testing effort. Consistent and efficient path-based test cases are desired to support these tests. Because certain parts of software system workflows have a higher business priority than others, this fact has to be reflected in the generation of test cases. In this paper, we propose the Prioritized Process Test (PPT), a model-based test case generation algorithm that represents an alternative to currently established algorithms, which use directed graphs and test requirements to model the system under test. The PPT accepts a directed multigraph as a model, expressing priorities through edge weights, which are used instead of test requirements. To determine the test-coverage level of test cases, a test-depth-level concept is used. We compared the PPT with five alternatives (the Process Cycle Test, a naive reduction of the test set created by the Process Cycle Test, a Brute Force algorithm, a Set-Covering-Based Solution, and a Matching-Based Prefix Graph Solution) for edge coverage and edge-pair coverage. To assess the optimality of the path-based test cases produced by these strategies, we used fourteen metrics based on the properties of these test cases, together with 59 models created for three real-world systems. For edge coverage, the PPT produced more optimal test cases than the alternatives in terms of the majority of the metrics. For edge-pair coverage, the PPT yielded results similar to those of the alternatives. Thus, the PPT is an applicable alternative, as it reflects both the required test coverage level and the business priority in parallel.
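To make the setting concrete, the sketch below generates path-based test cases from a directed multigraph whose edge weights encode business priority. The coverage rule used here (cover every edge whose weight meets a threshold, extending paths breadth-first up to a maximum length) is a simplified stand-in for the paper's test-depth-level concept, not the actual PPT algorithm; the graph and priorities are invented.

```python
# Sketch of prioritized path-based test case generation over a directed
# multigraph with edge weights as business priorities. A simplified
# stand-in for the PPT idea, not the published algorithm.
from collections import deque

# Multigraph as a list of edges: (id, src, dst, priority). Parallel edges allowed.
EDGES = [
    ("e1", "start", "a", 3), ("e2", "start", "a", 1),  # parallel edges
    ("e3", "a", "b", 2), ("e4", "b", "end", 3), ("e5", "a", "end", 1),
]

def prioritized_paths(edges, start, end, min_priority, max_len=6):
    """Return start-to-end edge sequences covering all high-priority edges."""
    out = {}
    for e in edges:
        out.setdefault(e[1], []).append(e)
    must_cover = {e[0] for e in edges if e[3] >= min_priority}
    paths, covered = [], set()
    queue = deque([[e] for e in out.get(start, [])])
    while queue and covered != must_cover:
        path = queue.popleft()
        last = path[-1]
        if last[2] == end:
            new = {e[0] for e in path} & must_cover
            if new - covered:           # keep only paths adding coverage
                paths.append([e[0] for e in path])
                covered |= new
            continue
        if len(path) < max_len:
            for e in out.get(last[2], []):
                queue.append(path + [e])
    return paths

if __name__ == "__main__":
    for p in prioritized_paths(EDGES, "start", "end", min_priority=2):
        print(" -> ".join(p))
```

Note that a multigraph is required here because two edges ("e1", "e2") share the same endpoints but carry different priorities, which a plain directed graph could not distinguish.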
Existing model-based processes for embedded real-time systems support the analysis of various non-functional properties, most notably schedulability, through model checking, simulation, or other means. The analysis results are then used to modify the system's design so that the expected properties are satisfied. A rigorous model-based design flow differs in that it aims at a system implementation derived from high-level models by applying a sequence of semantics-preserving transformations; properties established at any design step are preserved throughout the subsequent steps, including the executable implementation. We introduce such a design flow using a process network model of computation for application design at a high level, which combines streaming and reactive control processing with task parallelism. The schedulability of the so-called FPPNs (Fixed Priority Process Networks) is well studied, and various solutions have been presented. This article focuses on the steps of the design flow for deriving executable implementations on the BIP (Behavior - Interaction - Priority) runtime environment. FPPNs are designed using the TASTE toolset, a convenient architecture description interface. In this way, developers do not explicitly program low-level real-time OS services, and the schedulability properties are guaranteed by construction throughout the design steps. The approach has been validated on the design of a real spacecraft on-board application, scheduled for execution on an industrial multicore platform.
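Fixed-priority schedulability of the kind the abstract refers to is classically checked with response-time analysis; the sketch below shows that standard check on an invented task set. It illustrates only the analysis concept, not the FPPN/TASTE/BIP toolchain itself.

```python
# Minimal sketch of fixed-priority response-time analysis, the classic check
# behind schedulability claims like those made for FPPNs. The task set is
# invented for illustration.
import math

# (name, period T, worst-case execution time C); priority = list order,
# highest first (e.g., rate-monotonic: shorter period -> higher priority).
TASKS = [("ctrl", 10, 2), ("telemetry", 20, 4), ("logging", 50, 9)]

def response_time(i, tasks):
    """Fixed-point iteration: R = C_i + sum over higher-priority tasks j
    of ceil(R / T_j) * C_j. Returns None if R exceeds the deadline (= period)."""
    _, T_i, C_i = tasks[i]
    R = C_i
    while True:
        interference = sum(math.ceil(R / T_j) * C_j
                           for _, T_j, C_j in tasks[:i])
        R_next = C_i + interference
        if R_next > T_i:
            return None       # misses its deadline
        if R_next == R:
            return R          # fixed point reached
        R = R_next

if __name__ == "__main__":
    for i, (name, T, C) in enumerate(TASKS):
        R = response_time(i, TASKS)
        status = f"R={R} <= D={T}" if R is not None else "UNSCHEDULABLE"
        print(f"{name}: {status}")
```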