
An Efficient Method for Uncertainty Propagation in Robust Software Performance Estimation

Published by: Dr. Aldeida Aleti
Publication date: 2018
Research field: Informatics Engineering
Language: English





Software engineers often have to estimate the performance of a software system before having full knowledge of the system parameters, such as workload and operational profile. These uncertain parameters inevitably affect the accuracy of quality evaluations and the ability to judge whether the system can continue to fulfil performance requirements if actual parameter values differ from those expected. Previous work has addressed this problem by modelling the potential values of uncertain parameters as probability distribution functions and estimating the robustness of the system using Monte Carlo-based methods. These approaches require a large number of samples, which results in high computational cost and long waiting times. To address the computational inefficiency of existing approaches, we employ Polynomial Chaos Expansion (PCE) as a rigorous method for uncertainty propagation and further extend its use to robust performance estimation. The aim is to assess whether the software system is robust, i.e., whether it can withstand possible changes in parameter values and continue to meet performance requirements. PCE is a very efficient technique that requires significantly fewer computations to accurately estimate the distribution of performance indices. Through three very different case studies from different phases of software development and heterogeneous application domains, we show that PCE can accurately (>97%) estimate the robustness of various performance indices, and saves up to 225 hours of performance evaluation time compared to Monte Carlo simulation.
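To make the idea concrete, below is a minimal NumPy sketch of PCE-based robustness estimation for a single uncertain parameter. The M/M/1 response-time model, the Gaussian workload distribution, the polynomial degree, and the response-time threshold are all illustrative assumptions, not details taken from the paper; the point is that a handful of runs of the (in general expensive) performance model fit a polynomial surrogate, which can then be sampled cheaply to estimate the probability that the requirement is still met.

```python
import numpy as np
from numpy.polynomial import hermite_e as H

rng = np.random.default_rng(0)

# Hypothetical performance model (illustrative, not from the paper):
# mean response time of an M/M/1 queue with service rate MU and
# uncertain arrival rate lam.
MU = 10.0
def response_time(lam):
    return 1.0 / (MU - lam)                      # valid while lam < MU

# Uncertain parameter: lam = 6 + 0.5 * xi, with xi ~ Normal(0, 1).
def lam_from_xi(xi):
    return 6.0 + 0.5 * xi

# Fit a degree-4 PCE surrogate from a small number of "expensive" model runs.
degree = 4
xi_train = rng.standard_normal(20)
y_train = response_time(lam_from_xi(xi_train))

def hermite_design(xi):
    # Columns are probabilists' Hermite polynomials He_0 .. He_degree at xi.
    return np.column_stack([H.hermeval(xi, np.eye(degree + 1)[k])
                            for k in range(degree + 1)])

coeffs, *_ = np.linalg.lstsq(hermite_design(xi_train), y_train, rcond=None)

def pce_surrogate(xi):
    return hermite_design(xi) @ coeffs

# Robustness: probability that the response-time requirement is still met.
THRESHOLD = 0.4                                  # illustrative requirement (seconds)
xi_big = rng.standard_normal(100_000)
robustness_pce = np.mean(pce_surrogate(xi_big) <= THRESHOLD)              # cheap surrogate
robustness_mc = np.mean(response_time(lam_from_xi(xi_big)) <= THRESHOLD)  # brute force

print(f"PCE estimate (20 model runs)    : {robustness_pce:.4f}")
print(f"Monte Carlo (100000 model runs) : {robustness_mc:.4f}")
```

In this toy setting the surrogate is built from only 20 runs of the performance model yet is evaluated 100,000 times at negligible cost; this gap between expensive model runs and cheap surrogate evaluations is where the reported time savings come from.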


Read also

Software effort estimation models are typically developed based on an underlying assumption that all data points are equally relevant to the prediction of effort for future projects. The dynamic nature of several aspects of the software engineering process could mean that this assumption does not hold in at least some cases. This study employs three kernel estimator functions to test the stationarity assumption in five software engineering datasets that have been used in the construction of software effort estimation models. The kernel estimators are used in the generation of nonuniform weights which are subsequently employed in weighted linear regression modeling. In each model, older projects are assigned smaller weights while the more recently completed projects are assigned larger weights, to reflect their potentially greater relevance to present or future projects that need to be estimated. Prediction errors are compared to those obtained from uniform models. Our results indicate that, for the datasets that exhibit underlying nonstationary processes, uniform models are more accurate than the nonuniform models; that is, models based on kernel estimator functions are worse than the models where no weighting was applied. In contrast, the accuracies of uniform and nonuniform models for datasets that exhibited stationary processes were essentially equivalent. Our analysis indicates that as the heterogeneity of a dataset increases, the effect of stationarity is overridden. The results of our study also confirm prior findings that the accuracy of effort estimation models is independent of the type of kernel estimator function used in model development.
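As a concrete illustration of the weighting idea, the sketch below fits a weighted linear regression on a toy effort dataset, using a Gaussian kernel over project age so that recently completed projects receive larger weights. The dataset, the kernel form, and the bandwidth are illustrative assumptions and are not taken from the study, which compares three different kernel estimator functions across five real datasets.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy effort dataset (illustrative): kloc = project size, age = years since
# completion, effort = person-months.
kloc   = np.array([10, 25, 40, 55, 70, 85, 100, 120], dtype=float)
age    = np.array([ 8,  7,  6,  5,  4,  3,   2,   1], dtype=float)
effort = 3.0 * kloc + rng.normal(0, 15, kloc.shape)

# Gaussian kernel over project age: recent projects (small age) get larger weights.
bandwidth = 3.0
weights = np.exp(-(age / bandwidth) ** 2)

# Weighted linear regression: solve (X' W X) b = (X' W y).
X = np.column_stack([np.ones_like(kloc), kloc])
W = np.diag(weights)
beta_weighted = np.linalg.solve(X.T @ W @ X, X.T @ W @ effort)

# Uniform model for comparison (all projects equally relevant).
beta_uniform, *_ = np.linalg.lstsq(X, effort, rcond=None)

x_new = np.array([1.0, 60.0])            # new project of 60 KLOC
print("weighted estimate:", x_new @ beta_weighted)
print("uniform estimate :", x_new @ beta_uniform)
```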
A key goal of empirical research in software engineering is to assess practical significance, which answers whether the observed effects of some compared treatments show a relevant difference in practice in realistic scenarios. Even though plenty of standard techniques exist to assess statistical significance, connecting it to practical significance is not straightforward or routinely done; indeed, only a few empirical studies in software engineering assess practical significance in a principled and systematic way. In this paper, we argue that Bayesian data analysis provides suitable tools to assess practical significance rigorously. We demonstrate our claims in a case study comparing different test techniques. The case study's data was previously analyzed (Afzal et al., 2015) using standard techniques focusing on statistical significance. Here, we build a multilevel model of the same data, which we fit and validate using Bayesian techniques. Our method is to apply cumulative prospect theory on top of the statistical model to quantitatively connect our statistical analysis output to a practically meaningful context. This is then the basis both for assessing and arguing for practical significance. Our study demonstrates that Bayesian analysis provides a technically rigorous yet practical framework for empirical software engineering. A substantial side effect is that any uncertainty in the underlying data will be propagated through the statistical model, and its effects on practical significance are made clear. Thus, in combination with cumulative prospect theory, Bayesian analysis supports seamlessly assessing practical significance in an empirical software engineering context, thus potentially clarifying and extending the relevance of research for practitioners.
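A rough sketch of the final step: given posterior draws of a treatment difference from a Bayesian model, cumulative prospect theory can turn them into a single practically interpretable value. The prospect-theory parameters follow Tversky and Kahneman (1992); the posterior draws, the outcome scale, and the use of one weighting function for both gains and losses are simplifying assumptions for illustration, not the case study's data or the authors' exact model.

```python
import numpy as np

# Prospect-theory parameters from Tversky & Kahneman (1992).
ALPHA, BETA, LAMBDA, GAMMA = 0.88, 0.88, 2.25, 0.61

def value(x):
    # Concave for gains, convex and steeper (loss aversion) for losses.
    ax = np.abs(x)
    return np.where(x >= 0, ax ** ALPHA, -LAMBDA * ax ** BETA)

def w(p):
    # Inverse-S-shaped probability weighting.
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

def cpt_value(draws):
    """Cumulative prospect theory value of an empirical outcome distribution."""
    x = np.sort(draws)[::-1]                 # best outcome first, each with prob 1/n
    n = len(x)
    ranks = np.arange(1, n + 1)
    # Gains: pi_i = w(i/n) - w((i-1)/n), cumulating from the best outcome.
    wg = w(ranks / n)
    pi_gain = wg - np.concatenate(([0.0], wg[:-1]))
    # Losses: pi_i = w((n-i+1)/n) - w((n-i)/n), cumulating from the worst outcome.
    wl = w(ranks[::-1] / n)
    pi_loss = wl - np.concatenate((wl[1:], [0.0]))
    pi = np.where(x >= 0, pi_gain, pi_loss)
    return float(np.sum(pi * value(x)))

# Hypothetical posterior draws of the difference in faults found per hour
# between two test techniques (positive favours the new technique).
rng = np.random.default_rng(2)
posterior_diff = rng.normal(loc=0.8, scale=0.5, size=4000)

print("posterior mean difference:", posterior_diff.mean())
print("CPT value of adopting the new technique:", cpt_value(posterior_diff))
```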
Reliable effort estimation remains an ongoing challenge to software engineers. Accurate effort estimation is the state of the art of software engineering; effort estimation of software is the preliminary phase between the client and the business enterprise. The relationship between the client and the business enterprise begins with the estimation of the software, and the credibility of the client to the business enterprise increases with accurate estimation. Effort estimation often requires generalizing from a small number of historical projects. Generalization from such limited experience is an inherently under-constrained problem. Accurate estimation is a complex process because it can be visualized as software effort prediction and, as the term indicates, a prediction never becomes the actual value. This work follows the basics of the empirical software effort estimation models. The goal of this paper is to study empirical software effort estimation. The primary conclusion is that no single technique is best for all situations, and that a careful comparison of the results of several approaches is most likely to produce realistic estimates.
We consider a global variable consensus ADMM algorithm for solving large-scale PDE parameter estimation problems asynchronously and in parallel. To this end, we partition the data and distribute the resulting subproblems among the available workers. Since each subproblem can be associated with different forward models and right-hand sides, this provides ample options for tailoring the method to different applications, including multi-source and multi-physics PDE parameter estimation problems. We also consider an asynchronous variant of consensus ADMM to reduce communication and latency. Our key contribution is a novel weighting scheme that empirically increases the progress made in early iterations of the consensus ADMM scheme and is attractive when using a large number of subproblems. This makes consensus ADMM competitive for solving PDE parameter estimation problems, which incur immense costs per iteration. The weights in our scheme are related to the uncertainty associated with the solutions of each subproblem. We show, by way of example, that the weighting scheme combined with the asynchronous implementation improves the time-to-solution for 3D single-physics and multi-physics PDE parameter estimation problems.
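For readers unfamiliar with the method, below is a minimal synchronous sketch of global variable consensus ADMM on synthetic linear least-squares subproblems, which stand in for the far more expensive PDE subproblems. The optional weights argument only marks where a weighted consensus average would enter; the paper's specific uncertainty-based weighting scheme and its asynchronous implementation are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "forward models": worker i holds f_i(x) = 0.5 * ||A_i x - b_i||^2.
n_workers, n_params = 4, 5
x_true = rng.normal(size=n_params)
A = [rng.normal(size=(30, n_params)) for _ in range(n_workers)]
b = [Ai @ x_true + 0.05 * rng.normal(size=30) for Ai in A]

def consensus_admm(A, b, rho=1.0, iters=50, weights=None):
    """Global variable consensus ADMM for sum_i 0.5 * ||A_i x - b_i||^2."""
    m, n = len(A), A[0].shape[1]
    weights = np.ones(m) if weights is None else np.asarray(weights, dtype=float)
    z = np.zeros(n)
    x = [np.zeros(n) for _ in range(m)]
    u = [np.zeros(n) for _ in range(m)]      # scaled dual variables
    for _ in range(iters):
        # Local updates (closed form for least-squares subproblems).
        for i in range(m):
            x[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(n),
                                   A[i].T @ b[i] + rho * (z - u[i]))
        # Consensus update: (optionally weighted) average of x_i + u_i.
        z = sum(w * (xi + ui) for w, xi, ui in zip(weights, x, u)) / weights.sum()
        # Dual updates.
        for i in range(m):
            u[i] += x[i] - z
    return z

z_hat = consensus_admm(A, b)
print("relative error:", np.linalg.norm(z_hat - x_true) / np.linalg.norm(x_true))
```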
The complexity of software tasks and the uncertainty of crowd developer behaviors make it challenging to plan crowdsourced software development (CSD) projects. In a competitive crowdsourcing marketplace, competition for shared worker resources from multiple simultaneously open tasks adds another layer of uncertainty to the potential outcomes of software crowdsourcing. These factors lead to the need for supporting CSD managers with automated scheduling to improve the visibility and predictability of crowdsourcing processes and outcomes. To that end, this paper proposes an evolutionary algorithm-based task scheduling method for crowdsourced software development. The proposed evolutionary scheduling method uses a multiobjective genetic algorithm to recommend an optimal task start date. The method uses three fitness functions, based on project duration, task similarity, and task failure prediction, respectively. The task failure fitness function uses a neural network to predict the probability of task failure with respect to a specific task start date. The proposed method then recommends the best task start dates for the project as a whole and for each individual task so as to achieve the lowest project failure ratio. Experimental results on 4 projects demonstrate that the proposed method has the potential to reduce project duration by 33-78%.
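The sketch below shows the overall shape of such an evolutionary scheduler on a hypothetical six-task project: a genetic algorithm searches over task start days, here with a single scalarised fitness combining makespan and a similarity-weighted overlap penalty. The multiobjective formulation and the neural-network failure predictor used in the paper are replaced by this simple stand-in, and all task data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative project: 6 tasks with fixed durations (days) and a symmetric
# pairwise similarity matrix (hypothetical values, not from the paper).
durations = np.array([5, 3, 8, 4, 6, 2])
n_tasks = len(durations)
similarity = rng.random((n_tasks, n_tasks))
similarity = (similarity + similarity.T) / 2
np.fill_diagonal(similarity, 0.0)
HORIZON = 30                       # latest allowed start day

def fitness(starts):
    """Lower is better: makespan plus a penalty for overlapping similar tasks."""
    ends = starts + durations
    makespan = ends.max() - starts.min()
    penalty = 0.0
    for i in range(n_tasks):
        for j in range(i + 1, n_tasks):
            overlap = max(0, min(ends[i], ends[j]) - max(starts[i], starts[j]))
            penalty += similarity[i, j] * overlap
    return makespan + penalty

def evolve(pop_size=60, generations=200, mutation=0.2):
    pop = rng.integers(0, HORIZON, size=(pop_size, n_tasks))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]   # truncation selection
        children = parents.copy()
        # Uniform crossover between consecutive parents.
        for k in range(0, len(children) - 1, 2):
            mask = rng.random(n_tasks) < 0.5
            children[k, mask], children[k + 1, mask] = parents[k + 1, mask], parents[k, mask]
        # Random-reset mutation.
        mut = rng.random(children.shape) < mutation
        children[mut] = rng.integers(0, HORIZON, size=mut.sum())
        pop = np.vstack([parents, children])                 # elitist replacement
    return pop[np.argmin([fitness(ind) for ind in pop])]

best_starts = evolve()
print("recommended start days:", best_starts, "fitness:", fitness(best_starts))
```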