It is widely acknowledged by researchers and practitioners that software development methodologies are generally adapted to suit specific project contexts. Research into practices-as-implemented has been fragmented, tending to focus either on the strength of adherence to a specific methodology or on how the efficacy of specific practices is affected by contextual factors. We submit that a more holistic, integrated approach to investigating context-related best practice is needed. We propose a six-dimensional model of the problem space, with the dimensions organisational drivers (why), space and time (where), culture (who), product life-cycle stage (when), product constraints (what) and engagement constraints (how). We test our model by using it to describe and explain a reported implementation study. Our contributions are a novel approach to understanding situated software practices and a preliminary model for software contexts.
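To make the model concrete, a minimal sketch in Python follows. The class name ProjectContext, the attribute names and the example values are hypothetical, chosen here only to show how a project might be described along the why/where/who/when/what/how dimensions; they are not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProjectContext:
    """Illustrative sketch of a six-dimensional software context model."""
    organisational_drivers: List[str] = field(default_factory=list)  # why
    space_and_time: List[str] = field(default_factory=list)          # where
    culture: List[str] = field(default_factory=list)                 # who
    lifecycle_stage: str = ""                                        # when
    product_constraints: List[str] = field(default_factory=list)     # what
    engagement_constraints: List[str] = field(default_factory=list)  # how

# Hypothetical example: describing a single implementation study in model terms.
study = ProjectContext(
    organisational_drivers=["time-to-market pressure"],
    space_and_time=["two development sites, overlapping time zones"],
    culture=["small, experienced core team"],
    lifecycle_stage="new product development",
    product_constraints=["regulated domain"],
    engagement_constraints=["fixed-price contract"],
)
```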
A growing number of researchers suggest that software process must be tailored to a project's context to achieve maximal performance. Context has so far been studied in an ad hoc way, with a focus on those contextual factors that appear to be of significance, and the result is that we have no useful basis upon which to contrast and compare studies. We are currently researching a theoretical basis for software context for the purpose of tailoring, and note that a deeper consideration of the meaning of the term 'context' is required before we can proceed. In this paper, we examine the term and present a model based on insights gained from our initial categorisation of contextual factors from the literature. We test our understanding by analysing a further six documents. Our contribution thus far is a model that we believe will support a theoretical operationalisation of software context for the purpose of process tailoring.
Many prescriptive approaches to developing software-intensive systems have been advocated, but each is based on assumptions about context. It has been found that practitioners do not follow prescribed methodologies, but rather select and adapt specific practices according to local needs. As researchers, we would like to be in a position to support such tailoring. At present, however, we simply do not have sufficient evidence relating practice and context for this to be possible. We have long understood that a deeper understanding of situated software practices is crucial for progress in this area, and have been exploring this problem from a number of perspectives. In this position paper, we draw together the various aspects of our work into a holistic model and discuss the ways in which the model might be applied to support the long-term goal of evidence-based decision support for practitioners. The contribution specific to this paper is a discussion of model evaluation, including a proof-of-concept demonstration of model utility. We map Kernel elements from the Essence system to our model and discuss gaps and limitations exposed in the Kernel. Finally, we overview our plans for further refining and evaluating the model.
In the domain of software engineering, our efforts as researchers to advise industry on which software practices might be applied most effectively are limited by our lack of evidence-based information about the relationships between context and practice efficacy. In order to accumulate such evidence, a model for context is required. We are in the exploratory stage of evolving a model for context for situated software practices. In this paper, we overview the evolution of our proposed model. Our analysis has exposed a lack of clarity in the meanings of terms reported in the literature. Our base model dimensions are People, Place, Product and Process. Our contributions are a deepening of our understanding of how to scope contextual factors when considering software initiatives and the proposal of an initial theoretical construct for context. Study limitations relate to possible subjectivity in the analysis and a restricted evaluation base. In the next stage of the research, we will collaborate with academics and practitioners to formally refine the model.
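A minimal sketch of how reported contextual factors might be grouped under the four base dimensions is given below; the factor names and the group_by_dimension helper are illustrative assumptions for this summary, not the paper's actual categorisation.

```python
# Hypothetical contextual factors assigned to the four base dimensions.
BASE_DIMENSIONS = ("People", "Place", "Product", "Process")

factor_to_dimension = {
    "team experience": "People",
    "geographic distribution": "Place",
    "safety criticality": "Product",
    "release cadence": "Process",
}

def group_by_dimension(mapping):
    """Group contextual factors under their assigned base dimension."""
    grouped = {dimension: [] for dimension in BASE_DIMENSIONS}
    for factor, dimension in mapping.items():
        grouped[dimension].append(factor)
    return grouped

print(group_by_dimension(factor_to_dimension))
```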
Many methods have been proposed to estimate how much effort is required to build and maintain software. Much of that research assumes a "classic" waterfall-based approach rather than contemporary projects (where the development process may be more iterative than linear in nature). Also, much of that work tries to recommend a single method, an approach that makes the dubious assumption that one method can handle the diversity of software project data. To address these drawbacks, we apply a configuration technique called "ROME" (Rapid Optimizing Methods for Estimation), which uses sequential model-based optimization (SMO) to find what combination of effort estimation techniques works best for a particular data set. We test this method using data from 1161 classic waterfall projects and 120 contemporary projects (from GitHub). In terms of magnitude of relative error and standardized accuracy, we find that ROME achieves better performance than existing state-of-the-art methods on both classic and contemporary problems. In addition, we conclude that we should not recommend one method for estimation; rather, it is better to search through a wide range of different methods to find what works best for local data. To the best of our knowledge, this is the largest effort estimation experiment yet attempted and the only one to test its methods on both classic and contemporary projects.
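The two accuracy measures mentioned above are standard in effort estimation research. The sketch below shows one plausible way to compute mean magnitude of relative error and standardized accuracy; the simplified random-guessing baseline (resampling observed efforts) and the toy data are assumptions of this illustration, not the exact procedure used in the ROME experiments.

```python
import numpy as np

def mmre(actual, predicted):
    """Mean magnitude of relative error: mean(|actual - predicted| / actual)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs(actual - predicted) / actual))

def standardized_accuracy(actual, predicted, runs=1000, seed=0):
    """SA = 1 - MAE / MAE_p0, with MAE_p0 the mean absolute error of random
    guessing (approximated here by resampling the observed effort values)."""
    rng = np.random.default_rng(seed)
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    mae = np.mean(np.abs(actual - predicted))
    guesses = rng.choice(actual, size=(runs, len(actual)))
    mae_p0 = np.mean(np.abs(guesses - actual))
    return float(1.0 - mae / mae_p0)

# Toy effort values (person-hours); purely illustrative.
actual = [120, 340, 80, 560]
predicted = [100, 300, 95, 500]
print(mmre(actual, predicted), standardized_accuracy(actual, predicted))
```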
Formal verification techniques are widely used for detecting design flaws in software systems. Formal verification can be performed by transforming already-implemented source code into a formal model and attempting to prove certain properties of that model (e.g., that no erroneous state can occur during execution). Unfortunately, transformations from source code to a formal model often yield large and complex models, making the verification process inefficient and costly. In order to reduce the size of the resulting model, optimization transformations can be used. Such optimizations include common algorithms known from compiler design and different program slicing techniques. Our paper describes a framework for transforming C programs into a formal model, enhanced by various optimizations for size reduction. We evaluate and compare several optimization algorithms with respect to their effect on the size of the model and the efficiency of the verification. Results show that different optimizations are better suited to certain models, justifying the need for a framework that includes several algorithms.
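To give a flavour of the compiler-style size reductions referred to, the following toy constant-folding pass over a tiny expression AST is included as a sketch; the Const, Var and BinOp classes and the fold function are hypothetical and far simpler than the transformations applied to a real formal model of C code.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Const:
    value: int

@dataclass
class Var:
    name: str

@dataclass
class BinOp:
    op: str
    left: "Expr"
    right: "Expr"

Expr = Union[Const, Var, BinOp]

def fold(expr: Expr) -> Expr:
    """Recursively replace constant sub-expressions with their computed value."""
    if isinstance(expr, BinOp):
        left, right = fold(expr.left), fold(expr.right)
        if isinstance(left, Const) and isinstance(right, Const):
            if expr.op == "+":
                return Const(left.value + right.value)
            if expr.op == "*":
                return Const(left.value * right.value)
        return BinOp(expr.op, left, right)
    return expr

# (2 * 3) + x folds to 6 + x: fewer nodes means a smaller model to check.
print(fold(BinOp("+", BinOp("*", Const(2), Const(3)), Var("x"))))
```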