
Visual GUI testing in practice: An extended industrial case study

Published by Vahid Garousi
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Context: Visual GUI testing (VGT) is referred to as the latest generation of GUI-based testing. It is a tool-driven technique that uses image recognition for interacting with and asserting the behavior of the system under test (SUT). Motivated by the industrial need of a large Turkish software and systems company providing solutions in the defense and IT sectors, an action-research project was recently initiated to implement VGT in several teams and projects in the company. Objective: To address the above needs, we planned and carried out an empirical investigation with the goal of assessing VGT using two tools (Sikuli and JAutomate). The purpose was to determine a suitable approach and tool for VGT of a given project (software product) in the company and to increase the know-how of the company's test teams. Method: Using an action-research case-study design, we investigated the use of VGT in the studied organization. Specifically, using the two selected VGT tools, we conducted a quantitative and a qualitative evaluation of VGT. Results: By assessing the list of Challenges, Problems and Limitations (CPL), proposed in previous work, in the context of our empirical study, we found that the test-tool- and SUT-related CPLs were quite comparable to those of a previous empirical study; e.g., the synchronization between the SUT and the test tools was not always robust, and there were failures in the test tools' image recognition features. When assessing the types of test maintenance activities when executing the automated test cases on ne…
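To make the technique concrete: a VGT tool such as Sikuli drives the SUT by matching screenshots of its widgets on the screen and asserting on what appears. Below is a minimal SikuliX (Jython/Python) sketch of such a test; the login flow and the .png file names are invented for illustration and are not artifacts from the study.

    from sikuli import *  # SikuliX global API: click, type, wait, exists, Pattern

    def test_login_dialog():
        # Image recognition locates each widget on screen; a similarity
        # threshold below 1.0 tolerates minor rendering differences.
        click(Pattern("username_field.png").similar(0.8))
        type("tester")
        click(Pattern("password_field.png").similar(0.8))
        type("secret")
        click("login_button.png")

        # Synchronization point: wait for the SUT instead of sleeping,
        # since SUT/test-tool synchronization is a recurring CPL.
        wait("welcome_banner.png", 10)  # fail after 10 s if it never appears

        # Visual assertion on the SUT's behavior.
        assert exists("welcome_banner.png"), "login did not reach the welcome screen"

    test_login_dialog()

Such scripts are robust to changes in the GUI's internal structure but, as the results above note, remain sensitive to image-recognition failures and timing.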


Read also

477 - Maik Betka, Stefan Wagner 2021
Mutation testing is used to evaluate the effectiveness of test suites. In recent years, a promising variation called extreme mutation testing emerged that is computationally less expensive. It identifies methods whose functionality can be entirely removed without the test suite noticing it, despite having coverage. These methods are called pseudo-tested. In this paper, we compare the execution and analysis times for traditional and extreme mutation testing and discuss what they mean in practice. We look at how extreme mutation testing impacts current software development practices and discuss open challenges that need to be addressed to foster industry adoption. For that, we conducted an industrial case study consisting of running traditional and extreme mutation testing in a large software project from the semiconductor industry that is covered by a test suite of more than 11,000 unit tests. In addition, we did a qualitative analysis of 25 pseudo-tested methods and interviewed two experienced developers to see how they write unit tests, and we gathered opinions on how useful the findings of extreme mutation testing are. Our results include execution times, scores, numbers of executed tests and mutators, reasons why methods are pseudo-tested, and an interview summary. We conclude that the shorter execution and analysis times are well noticeable in practice and show that extreme mutation testing supplements writing unit tests in conjunction with code coverage tools. We propose that pseudo-tested code should be highlighted in code coverage reports and that extreme mutation testing should be performed when writing unit tests rather than in a decoupled session. Future research should investigate how to perform extreme mutation testing while writing unit tests such that the results are available fast enough but still meaningful.
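To illustrate what "pseudo-tested" means, here is a small self-contained Python sketch (the study above targets Java, so this is only an analogy, and all names are invented): a test that merely executes a method without asserting on its effects keeps passing when extreme mutation strips the method's body.

    # Extreme mutation replaces a method's whole body with a default return;
    # if every covering test still passes, the method is "pseudo-tested"
    # despite showing up as fully covered in a coverage report.

    def normalize(scores):
        """Scale scores into [0, 1]. Original implementation."""
        top = max(scores)
        return [s / top for s in scores]

    def normalize_mutated(scores):
        """Extreme mutant: the body is gone, only a default return remains."""
        return []

    def test_normalize():
        # Executes (covers) normalize but asserts nothing about the result,
        # so it cannot distinguish the original from the mutant.
        normalize([2.0, 4.0])

    test_normalize()              # passes against the original
    normalize = normalize_mutated # apply the extreme mutation
    test_normalize()              # still passes -> normalize is pseudo-tested

A single assertion on the returned list (e.g., that it equals [0.5, 1.0]) would kill the mutant, which is exactly the kind of gap the paper proposes surfacing in coverage reports.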
Context: Safety analysis is a predominant activity in developing safety-critical systems. It is a highly cooperative task among multiple functional departments due to increasingly sophisticated safety-critical systems and close-knit development processes. Communication occurs pervasively. Motivation: Effective communication channels among multiple functional departments influence safety analysis and its quality, as well as safe product delivery. However, the use of communication channels during safety analysis is sometimes arbitrary and poses challenges. Objective: Investigate the existing communication channels, their usage frequencies, their purposes, and their challenges during safety analysis in industry. Method: A multiple-case study of experts (survey: 39, interview: 21) in safety-critical companies, including software developers, quality engineers, and functional safety managers. Direct observations and documentation review were also conducted. Results: Popular communication channels during safety analysis include formal meetings, project coordination tools, documentation, and telephone. Email, personal discussion, training, internal communication software, and boards are also in use. Training involving safety analysis happens 1-4 times per year, while use of the other aforementioned channels ranges from 1-4 times per day to 1-4 times per month. We summarise 28 purposes for these communication channels. Communication happens mostly for the purposes of clarifying safety requirements, resolving temporary problems, conflicts, and obstacles, and sharing safety knowledge. The top challenges are reported. Conclusion: To use communication channels effectively and avoid challenges during safety analysis, a clear purpose of communication should be established at the beginning. Deriving countermeasures for the top 10 challenges is a potential next step.
82 - Guannan Lou, Yao Deng, Xi Zheng 2021
Autonomous driving shows great potential to reform modern transportation, and its safety is attracting much attention from the public. Autonomous driving systems generally include deep neural networks (DNNs) to gain better performance (e.g., accuracy on object detection and trajectory prediction). However, compared with traditional software systems, this new paradigm (i.e., program + DNNs) makes software testing more difficult. Recently, the software engineering community has spent significant effort in developing new testing methods for autonomous driving systems. However, it is not clear to what extent those testing methods have addressed the needs of industrial practitioners of autonomous driving. To fill this gap, in this paper we present the first comprehensive study to identify the current practices and needs of testing autonomous driving systems in industry. We conducted semi-structured interviews with developers from 10 autonomous driving companies and surveyed 100 developers who have worked on autonomous driving systems. Through thematic analysis of the interview and questionnaire data, we identified five urgent industry needs for testing autonomous driving systems. We further analyzed the limitations of existing testing methods in addressing those needs and proposed several future directions for software testing researchers.
For many decades, formal methods have been considered the way forward in helping the software industry make more reliable and trustworthy software. However, despite this strong belief and many individual success stories, no real change in industrial software development seems to be occurring. In fact, the software industry itself is moving forward rapidly, and the gap between what formal methods can achieve and daily software-development practice does not appear to be getting smaller (and might even be growing). In the past, many recommendations have already been made on how to develop formal-methods research in order to close this gap. This paper investigates why the gap nevertheless still exists and provides its own recommendations on what the formal-methods-research community can do to bridge it. Our recommendations do not focus on open research questions. In fact, formal-methods tools and techniques are already of high quality and can address many non-trivial problems; we do give some technical recommendations on how tools and techniques can be made more accessible. To a greater extent, we focus on the human aspect: how to achieve impact, how to change the way the various stakeholders think about this issue, and in particular, as a research community, how to alter our behaviour and, instead of competing, collaborate to address this issue.
86 - Daniel Kraus 2018
ReTest is a novel testing tool for Java applications with a graphical user interface (GUI), combining monkey testing and difference testing. Since this combination sidesteps the oracle problem, it enables the generation of GUI-based regression tests. ReTest makes use of evolutionary computing (EC), particularly a genetic algorithm (GA), to optimize these tests towards code coverage. While this is indeed a desirable goal in terms of software testing and potentially finds many bugs, it lacks one major ingredient: human behavior. Consequently, human testers often find the results less reasonable and difficult to interpret. This thesis proposes a new approach to improve the initial population of the GA with the aid of machine learning (ML), forming an ML-technique enhanced EC (MLEC) algorithm. In order to do so, existing tests are exploited to extract information on how human testers use the given GUI. The obtained data is then utilized to train an artificial neural network (ANN), which ranks the available GUI actions and their underlying GUI components at runtime, reducing the gap between manually created and automatically generated regression tests. Although the approach is implemented on top of ReTest, it can easily be used to guide any form of monkey testing. The results show that with only a little training data, the ANN is able to reach an accuracy of 82%, and the resulting tests represent an improvement without significantly reducing the overall code coverage or performance.
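The core idea, ranking candidate GUI actions with a learned model and sampling from that ranking instead of uniformly, can be sketched in a few lines of Python. This is not ReTest's implementation; the feature encoding and the linear stand-in for the trained ANN are invented for illustration.

    # Guided monkey testing: instead of picking GUI actions uniformly at
    # random, weight each candidate by a learned score of how likely a
    # human tester would be to choose it.

    import random

    def encode(action):
        # Toy feature vector for a GUI action; a real system would encode
        # component type, label text, position, and widget state.
        return [
            1.0 if action["type"] == "click" else 0.0,
            1.0 if action["component"] == "button" else 0.0,
            1.0 if action["enabled"] else 0.0,
        ]

    def score(action, weights):
        # Stand-in for the trained ANN: a linear scorer over the features.
        return sum(w * x for w, x in zip(weights, encode(action)))

    def pick_action(candidates, weights):
        # Weighted random choice: human-like actions are favored, but the
        # monkey can still explore any action, preserving coverage.
        scores = [max(score(a, weights), 0.01) for a in candidates]
        return random.choices(candidates, weights=scores, k=1)[0]

    candidates = [
        {"type": "click", "component": "button", "enabled": True,  "name": "OK"},
        {"type": "click", "component": "panel",  "enabled": True,  "name": "bg"},
        {"type": "click", "component": "button", "enabled": False, "name": "Apply"},
    ]
    weights = [0.5, 1.0, 0.8]  # pretend these were learned from human-written tests
    print(pick_action(candidates, weights)["name"])

Keeping a small floor on every action's weight is one way to retain the exploratory character of monkey testing while biasing the generated tests toward human-like interaction sequences.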