
Enablers and Impediments for Collaborative Research in Software Testing: An Empirical Exploration

Posted by Eduard Paul Enoiu
Publication date: 2014
Research field: Informatics Engineering
Paper language: English





In industrial organizations, collaboration efforts in software engineering research are very often kept in-house, depriving these organizations of the skills necessary to build independent collaborative research. The current trend towards empirical software engineering research requires standards to be established that would guide these collaborative efforts in creating a strong partnership promoting independent, evidence-based software engineering research. This paper examines key enabling factors for an efficient and effective industry-academia collaboration in the software testing domain. A major finding of the research was that while technology is a strong enabler of better collaboration, it must be complemented with industrial openness to disclose research results and with the use of a dedicated tooling platform. As an example, we use an automated test generation approach that has been developed over the last two years in collaboration with Bombardier Transportation AB in Sweden.




Read also

Empirical Standards are natural-language models of a scientific community's expectations for a specific kind of study (e.g. a questionnaire survey). The ACM SIGSOFT Paper and Peer Review Quality Initiative generated empirical standards for research methods commonly used in software engineering. These living documents, which should be continuously revised to reflect evolving consensus around research best practices, will improve research quality and make peer review more effective, reliable, transparent and fair.
Statistics comes in two main flavors: frequentist and Bayesian. For historical and technical reasons, frequentist statistics have traditionally dominated empirical data analysis, and certainly remain prevalent in empirical software engineering. This situation is unfortunate because frequentist statistics suffer from a number of shortcomings, such as lack of flexibility and results that are unintuitive and hard to interpret, which curtail their effectiveness when dealing with the heterogeneous data that is increasingly available for empirical analysis of software engineering practice. In this paper, we pinpoint these shortcomings and present Bayesian data analysis techniques that provide tangible benefits, as they can provide clearer results that are simultaneously robust and nuanced. After a short, high-level introduction to the basic tools of Bayesian statistics, we present the reanalysis of two empirical studies on the effectiveness of automatically generated tests and the performance of programming languages. By contrasting the original frequentist analyses with our new Bayesian analyses, we demonstrate the concrete advantages of the latter. To conclude, we advocate a more prominent role for Bayesian statistical techniques in empirical software engineering research and practice.
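
To make the contrast drawn in this abstract more concrete, the following is a minimal, illustrative sketch (not taken from the paper, and using invented data) that compares a frequentist two-sample t-test with a simple Bayesian grid approximation of the posterior over the difference in mean effectiveness between two hypothetical test-generation tools:

```python
# Illustrative only: contrasts a frequentist t-test with a simple Bayesian
# grid approximation for the difference in means of two invented samples,
# e.g. effectiveness scores achieved by two test-generation tools.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tool_a = rng.normal(0.70, 0.10, size=30)   # hypothetical effectiveness scores
tool_b = rng.normal(0.65, 0.10, size=30)

# Frequentist view: a p-value for the null hypothesis "no difference in means".
t_stat, p_value = stats.ttest_ind(tool_a, tool_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Bayesian view: a posterior over the difference in means, approximated on a
# grid with a flat prior and a normal likelihood for the observed difference.
deltas = np.linspace(-0.2, 0.2, 2001)
se = np.sqrt(tool_a.var(ddof=1) / len(tool_a) + tool_b.var(ddof=1) / len(tool_b))
observed = tool_a.mean() - tool_b.mean()
posterior = stats.norm.pdf(observed, loc=deltas, scale=se)
posterior /= posterior.sum()

prob_a_better = posterior[deltas > 0].sum()
print(f"P(tool A more effective than tool B | data) = {prob_a_better:.2f}")
```

The Bayesian output is a directly interpretable probability statement about the quantity of interest, the kind of clearer, more nuanced result the abstract argues for, whereas the frequentist output is a p-value against a null hypothesis.
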
Many science advances have been possible thanks to the use of research software, which has become essential to advancing virtually every Science, Technology, Engineering and Mathematics (STEM) discipline and many non-STEM disciplines, including social sciences and humanities. And while much of it is made available under open source licenses, work is needed to develop, support, and sustain it, as underlying systems and software as well as user needs evolve. In addition, the changing landscape of high-performance computing (HPC) platforms, where performance and scaling advances are ever more reliant on software and algorithm improvements as we hit hardware scaling barriers, is causing renewed tension between the sustainability of software and its performance. We must do more to highlight the trade-off between performance and sustainability, and to emphasize the need for sustainability given that complex software stacks don't survive without frequent maintenance, which is made more difficult as a generation of developers of established and heavily used research software retires. Several HPC forums are doing this, and it has become an active area of funding as well. In response, the authors organized and ran a panel at the SC18 conference. The objectives of the panel were to highlight the importance of sustainability, to illuminate the tension between pure performance and sustainability, and to steer SC community discussion toward understanding and addressing this issue and this tension. The outcome of the discussions, as presented in this paper, can inform choices of advanced compute and data infrastructures to positively impact future research software and future research.
Software engineering research is evolving and papers are increasingly based on empirical data from a multitude of sources, using statistical tests to determine if and to what degree empirical evidence supports their hypotheses. To investigate the practices and trends of statistical analysis in empirical software engineering (ESE), this paper presents a review of a large pool of papers from top-ranked software engineering journals. First, we manually reviewed 161 papers and, in the second phase of our method, we conducted a more extensive semi-automatic classification of 5,196 papers spanning the years 2001--2015. Results from both review steps were used to: i) identify and analyze the predominant practices in ESE (e.g., using the t-test or ANOVA), as well as relevant trends in the usage of specific statistical methods (e.g., nonparametric tests and effect size measures), and ii) develop a conceptual model for a statistical analysis workflow with suggestions on how to apply different statistical methods as well as guidelines to avoid pitfalls. Lastly, we confirm existing claims that current ESE practices lack a standard for reporting the practical significance of results. We illustrate how practical significance can be discussed in terms of both the statistical analysis and the practitioner's context.
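
As a small illustration of the reporting gap this review points to, the hedged sketch below (with invented data) pairs a nonparametric test with a simple effect-size measure, so that practical significance can be reported alongside statistical significance:

```python
# Illustrative only: a nonparametric test (Mann-Whitney U) reported together
# with a simple effect size (Cliff's delta), so that practical significance
# can be judged and not just the p-value. The samples below are invented.
import numpy as np
from scipy import stats

baseline = np.array([12, 15, 14, 10, 13, 16, 11, 14, 12, 15])   # e.g. faults found
treatment = np.array([16, 18, 15, 17, 19, 14, 18, 17, 16, 20])

u_stat, p_value = stats.mannwhitneyu(treatment, baseline, alternative="two-sided")

# Cliff's delta: proportion of pairs where treatment > baseline minus the
# proportion where treatment < baseline; ranges from -1 to 1.
greater = sum(t > b for t in treatment for b in baseline)
less = sum(t < b for t in treatment for b in baseline)
cliffs_delta = (greater - less) / (len(treatment) * len(baseline))

print(f"U = {u_stat:.1f}, p = {p_value:.4f}, Cliff's delta = {cliffs_delta:.2f}")
```

Reporting an effect size such as Cliff's delta next to the p-value lets practitioners judge whether an observed difference is large enough to matter in their own context.
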
For software to be reliable and resilient, it is widely accepted that tests must be created and maintained alongside the software itself. One safeguard against vulnerabilities and failures in code is to ensure correct behavior on the boundaries between the input space sub-domains. So-called boundary value analysis (BVA) and boundary value testing (BVT) techniques aim to exercise those boundaries and increase test effectiveness. However, the concepts of BVA and BVT themselves are not generally well defined, and it is not clear how to identify relevant sub-domains, and thus the boundaries delineating them, given a specification. This has limited adoption and hindered automation. We clarify BVA and BVT and introduce Boundary Value Exploration (BVE) to describe techniques that support them by helping to detect and identify boundary inputs. Additionally, we propose two concrete BVE techniques based on information-theoretic distance functions: (i) an algorithm for boundary detection and (ii) the usage of software visualization to explore the behavior of the software under test and identify its boundary behavior. As an initial evaluation, we apply these techniques to a much-used and well-tested date-handling library. Our results reveal questionable behavior at boundaries highlighted by our techniques. In conclusion, we argue that the boundary value exploration that our techniques enable is a step towards automated boundary value analysis and testing, fostering their wider use and improving test effectiveness and efficiency.
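
To give a flavor of what an information-theoretic boundary detector could look like, here is a hedged sketch; it is not the authors' implementation, but it uses normalized compression distance between the outputs of neighboring inputs of a toy date-handling function to flag candidate boundaries:

```python
# Illustrative only: flags candidate boundary inputs by measuring how much the
# output of a function changes between neighboring inputs, using normalized
# compression distance (NCD) as a crude information-theoretic distance.
import zlib
from datetime import date, timedelta

def ncd(a: str, b: str) -> float:
    """Normalized compression distance between two strings."""
    ca = len(zlib.compress(a.encode()))
    cb = len(zlib.compress(b.encode()))
    cab = len(zlib.compress((a + b).encode()))
    return (cab - min(ca, cb)) / max(ca, cb)

def describe_month(d: date) -> str:
    """Toy function under test: describes the length of the month a date falls in."""
    first_of_next = (d.replace(day=28) + timedelta(days=4)).replace(day=1)
    length = (first_of_next - timedelta(days=1)).day
    return f"{d.year}-{d.month:02d} has {length} days"

# Walk a range of inputs and mark the adjacent pair whose outputs differ most;
# large jumps in output distance hint at sub-domain boundaries (month ends).
start = date(2023, 1, 25)
inputs = [start + timedelta(days=i) for i in range(10)]
pairs = list(zip(inputs, inputs[1:]))
distances = [ncd(describe_month(a), describe_month(b)) for a, b in pairs]
for (a, b), dist in zip(pairs, distances):
    marker = "  <-- largest jump, candidate boundary" if dist == max(distances) else ""
    print(f"{a} -> {b}: NCD = {dist:.2f}{marker}")
```

Pairs of adjacent inputs whose outputs lie unusually far apart (here, the step from the last day of a month to the first day of the next) are natural candidates for the sub-domain boundaries that BVA and BVT aim to exercise.
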