Test generation at the graphical user interface (GUI) level has proven to be an effective method to reveal faults. When doing so, a test generator has to repeatedly decide which action to execute given the current state of the system under test (SUT). This action selection problem is usually addressed with random choice, which is often referred to as monkey testing. Some approaches leverage other techniques to improve the overall effectiveness, but only a few try to create human-like actions---or even entire action sequences. We have built a novel session-based recommender system that can guide test generation. This allows us to mimic past user behavior, reaching states that require complex interactions. We present preliminary results from an empirical study, where we use GitHub as the SUT. These results show that recommender systems appear to be well-suited for action selection, and that the approach can significantly contribute to the improvement of GUI-based test generation.
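As a rough illustration of the idea, the following minimal sketch shows how a session-based model learned from past user sessions could replace purely random action selection. The bigram counting, the action identifiers, and the epsilon fallback are assumptions made for illustration only, not the approach described above.

```python
# Illustrative sketch only: a session-based recommender guiding action selection
# instead of a purely random (monkey-style) choice. All names are hypothetical.
import random
from collections import defaultdict

def train_bigram_model(sessions):
    """Count how often users performed action b right after action a."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for prev, nxt in zip(session, session[1:]):
            counts[prev][nxt] += 1
    return counts

def select_action(model, history, candidates, epsilon=0.2):
    """Prefer the candidate most often observed after the last executed action,
    falling back to a random choice with probability epsilon."""
    if not history or random.random() < epsilon:
        return random.choice(candidates)
    follow_ups = model.get(history[-1], {})
    return max(candidates, key=lambda a: follow_ups.get(a, 0))

# Toy usage: past user sessions recorded as sequences of GUI action identifiers.
sessions = [["open_repo", "click_issues", "new_issue", "submit"],
            ["open_repo", "click_issues", "filter_open"]]
model = train_bigram_model(sessions)
print(select_action(model, ["open_repo", "click_issues"],
                    ["new_issue", "filter_open", "logout"]))
```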
Artificial Intelligence (AI) software systems, such as Sentiment Analysis (SA) systems, typically learn from large amounts of data that may reflect human biases. Consequently, the machine learning model in such software systems may exhibit unintended demographic bias based on specific characteristics (e.g., gender, occupation, country of origin). Such biases manifest in an SA system when it predicts different sentiments for similar texts that differ only in the characteristic of the individuals described. Existing studies on revealing bias in SA systems rely on producing sentences from a small set of short, predefined templates. To address this limitation, we present BiasFinder, an approach to discover biased predictions in SA systems via metamorphic testing. A key feature of BiasFinder is the automatic curation of suitable templates from pieces of text in a large corpus, using various Natural Language Processing (NLP) techniques to identify words that describe demographic characteristics. Next, BiasFinder instantiates new texts from these templates by filling in the placeholders with words associated with a class of a characteristic (e.g., gender-specific words such as female names, she, her). These texts are used to tease out bias in an SA system. BiasFinder identifies a bias-uncovering test case when it detects that the SA system exhibits demographic bias for a pair of texts, i.e., when it predicts different sentiments for texts that differ only in the words associated with different classes (e.g., male vs. female) of a target characteristic (e.g., gender). Our empirical evaluation showed that BiasFinder can effectively create a large number of realistic and diverse test cases that uncover various biases in an SA system, with a high true positive rate of up to 95.8%.
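A minimal sketch of the metamorphic idea described above (not the tool itself): instantiate one template with words from two classes of a characteristic and flag a bias-uncovering test case when the predicted sentiments differ. The template, the class word lists, and the predict_sentiment stand-in are assumptions for illustration.

```python
def predict_sentiment(text: str) -> str:
    # Stand-in for the real SA system under test; deliberately biased here so
    # the demo below actually reports a bias-uncovering test case.
    return "negative" if "Mary" in text else "positive"

TEMPLATE = "{name} said the service was great, and {pronoun} would come back."
CLASSES = {
    "male":   {"name": "John", "pronoun": "he"},
    "female": {"name": "Mary", "pronoun": "she"},
}

def bias_uncovering_test_case(template, classes):
    """Return the pair of texts and their sentiments if the SA system predicts
    different sentiments for mutants differing only in class-specific words."""
    texts = {c: template.format(**words) for c, words in classes.items()}
    sentiments = {c: predict_sentiment(t) for c, t in texts.items()}
    if len(set(sentiments.values())) > 1:
        return texts, sentiments   # demographic bias detected
    return None

print(bias_uncovering_test_case(TEMPLATE, CLASSES))
```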
Industrial cyber-physical systems require complex distributed software to orchestrate many heterogeneous mechatronic components and to control multiple physical processes. Industrial automation software is typically developed in a model-driven fashion, where abstractions of physical processes called plant models are co-developed and iteratively refined along with the control code. Testing such multi-dimensional systems is extremely difficult because the models are often inaccurate, may not correspond to their subsequent refinements, and the software must eventually be tested on the real plant, especially in safety-critical systems such as nuclear plants. This paper proposes a framework in which high-level functional requirements are used to automatically generate test cases for designs at all abstraction levels of the model-driven engineering process. Requirements are initially specified in natural language and then analyzed and specified using a formalized ontology. The requirements ontology is refined along with the controller and plant models during the design and development stages, so that test cases can be generated automatically at any stage. A representative industrial water process system case study illustrates the strengths of the proposed formalism. The requirements meta-model proposed by the CESAR European project is used for requirements engineering, while IEC 61131-3 and model-driven concepts are used in the design and development phases. A tool resulting from the proposed framework, called REBATE (Requirements Based Automatic Testing Engine), is used to generate and execute test cases for increasingly concrete controller and plant models.
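To make the pipeline concrete, the following hypothetical sketch shows how a requirement, formalized as a precondition/stimulus/expected-response triple, could be turned into an executable test against whichever controller or plant model exists at the current abstraction level. The field names, the simulate() interface, and the water-tank example are assumptions, not REBATE's actual API.

```python
# Hypothetical illustration of requirement-driven test generation across model
# abstraction levels; not the actual REBATE tool interface.
from dataclasses import dataclass

@dataclass
class Requirement:
    precondition: dict   # required initial plant state, e.g. {"tank_level": 0.9}
    stimulus: dict       # input applied to the controller, e.g. {"inlet_valve": "open"}
    expected: dict       # expected observable response, e.g. {"pump": "on"}

def generate_test(req: Requirement, simulate):
    """Build a test that sets up the precondition, applies the stimulus via the
    given model simulator, and checks the expected response."""
    def test():
        state = simulate(initial=req.precondition, inputs=req.stimulus)
        return all(state.get(k) == v for k, v in req.expected.items())
    return test

# Toy simulator standing in for a plant/controller model at some abstraction level.
def toy_simulate(initial, inputs):
    state = dict(initial, **inputs)
    state["pump"] = "on" if state.get("tank_level", 0) > 0.8 else "off"
    return state

req = Requirement({"tank_level": 0.9}, {"inlet_valve": "open"}, {"pump": "on"})
print(generate_test(req, toy_simulate)())  # True if the model satisfies the requirement
```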
ReTest is a novel testing tool for Java applications with a graphical user interface (GUI), combining monkey testing and difference testing. Since this combination sidesteps the oracle problem, it enables the generation of GUI-based regression tests. ReTest makes use of evolutionary computing (EC), particularly a genetic algorithm (GA), to optimize these tests towards code coverage. While this is indeed a desirable goal in terms of software testing and potentially finds many bugs, it lacks one major ingredient: human behavior. Consequently, human testers often find the results less reasonable and difficult to interpret. This thesis proposes a new approach that improves the initial population of the GA with the aid of machine learning (ML), forming an ML-technique enhanced EC (MLEC) algorithm. To do so, existing tests are exploited to extract information on how human testers use the given GUI. The obtained data is then used to train an artificial neural network (ANN), which ranks the available GUI actions and their underlying GUI components at runtime---reducing the gap between manually created and automatically generated regression tests. Although the approach is implemented on top of ReTest, it can easily be used to guide any form of monkey testing. The results show that with only a small amount of training data, the ANN is able to reach an accuracy of 82%, and the resulting tests represent an improvement without significantly reducing overall code coverage or performance.
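The core mechanism can be illustrated with a rough sketch: learn from recorded human tests which GUI components are worth interacting with, then use the learned scores to rank candidate actions when seeding the GA's initial population. The feature encoding and the use of scikit-learn are assumptions made purely for illustration; the thesis itself targets ReTest and Java GUIs.

```python
# Illustrative sketch of ranking GUI actions with a small ANN; hypothetical features.
from sklearn.neural_network import MLPClassifier

# Each GUI component is encoded by simple features, e.g. (is_button, is_enabled,
# is_visible, depth_in_widget_tree); the label says whether a human tester used it.
X = [[1, 1, 1, 2], [0, 1, 1, 5], [1, 0, 1, 3], [1, 1, 0, 4], [0, 1, 1, 2]]
y = [1, 0, 0, 0, 1]

ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)

def rank_actions(candidates):
    """Order candidate actions by the ANN's probability that a human would pick them."""
    scores = ann.predict_proba([c["features"] for c in candidates])[:, 1]
    return [c for _, c in sorted(zip(scores, candidates), key=lambda p: -p[0])]

candidates = [{"id": "loginButton", "features": [1, 1, 1, 2]},
              {"id": "hiddenLabel", "features": [0, 1, 0, 6]}]
print([c["id"] for c in rank_actions(candidates)])
```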
The testing of Deep Neural Networks (DNNs) has become increasingly important as DNNs are widely adopted in safety-critical systems. While many test adequacy criteria have been suggested, automated test input generation for many types of DNNs remains a challenge because the raw input space is too large to randomly sample or to navigate and search for plausible inputs. Consequently, current testing techniques for DNNs depend on small local perturbations of existing inputs, based on the metamorphic testing principle. We propose new ways to search not over the entire image space, but rather over a plausible input space that resembles the true training distribution. This space is constructed using Variational Autoencoders (VAEs) and navigated through their latent vector space. We show that this space helps efficiently produce test inputs that can reveal information about the robustness of DNNs when dealing with realistic tests, opening the field to meaningful exploration through the space of highly structured images.
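A conceptual sketch of this kind of search, under stated assumptions: rather than perturbing raw pixels, walk through a VAE's latent space and decode each latent vector into a plausible test input for the DNN under test. The vae_decode and dnn_predict functions below are stand-ins; a real setup would use a trained VAE decoder and the actual model being tested.

```python
# Conceptual sketch of latent-space search for DNN test inputs; stand-in models only.
import numpy as np

rng = np.random.default_rng(0)

def vae_decode(z):
    # Stand-in decoder: maps a 16-dimensional latent vector to a fake 8x8 "image".
    return np.tanh(np.outer(z[:8], z[8:16]))

def dnn_predict(image):
    # Stand-in classifier for the DNN under test.
    return int(image.sum() > 0)

def latent_search(z_seed, steps=50, step_size=0.1):
    """Randomly walk the latent space around a seed and collect inputs whose
    prediction differs from the seed's, i.e. candidate robustness failures."""
    base_label = dnn_predict(vae_decode(z_seed))
    failures = []
    z = z_seed.copy()
    for _ in range(steps):
        z = z + step_size * rng.standard_normal(z.shape)
        image = vae_decode(z)
        if dnn_predict(image) != base_label:
            failures.append((z.copy(), image))
    return failures

z_seed = rng.standard_normal(16)
print(len(latent_search(z_seed)), "behaviour-changing inputs found")
```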
Test automation is common in software development; often, tests are run repeatedly to identify regressions. If the number of test cases is large, one may select a subset and only use the most important test cases. This regression test selection (RTS) can be automated and enhanced with Artificial Intelligence (AI-RTS). However, this could introduce ethical challenges. While such challenges in AI are generally well studied, there is a gap with respect to ethical AI-RTS. By exploring the literature and learning from our experience of developing an industrial AI-RTS tool, we contribute to the literature by identifying three challenges (assigning responsibility, bias in decision-making, and lack of participation) and three approaches (explicability, supervision, and diversity). Additionally, we provide a checklist for ethical AI-RTS to help guide the decision-making of the stakeholders involved in the process.