This paper presents a learning-based approach to detecting failures in reactive systems. The technique is based on inferring models of multiple implementations of a common specification, which are then pairwise cross-checked for equivalence. Any counterexample to equivalence is flagged as suspicious and has to be analysed manually. Hence, failures can be found semi-automatically, without prior modelling. We show that the approach is effective by means of a case study in which we learned models of five implementations of brokers/servers for MQTT, a publish/subscribe protocol used in the Internet of Things. Examining these models, we found several violations of the MQTT specification; all but one of the considered implementations showed faulty behaviour. In our analysis, we discuss the effectiveness of the approach as well as issues we faced.
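To make the cross-checking step concrete, the following Java sketch (our own illustration, with all names hypothetical) assumes the learned models are complete, deterministic Mealy machines encoded as per-state transition maps. A breadth-first search over the product of two models returns a shortest input sequence on which their outputs differ; such a sequence is exactly the kind of counterexample that is flagged as suspicious and analysed manually.

```java
// Hypothetical sketch of pairwise cross-checking of two learned Mealy
// machines. Assumes complete, deterministic machines with fewer than
// 100,000 states (for the pair-to-long key encoding).
import java.util.*;

public class CrossCheck {
    record Edge(String output, int target) {}

    /** machine.get(state).get(input) -> (output, successor state) */
    static List<String> findCounterexample(
            List<Map<String, Edge>> brokerA,
            List<Map<String, Edge>> brokerB,
            Set<String> inputs) {
        Deque<int[]> queue = new ArrayDeque<>();
        Map<Long, List<String>> trace = new HashMap<>();
        queue.add(new int[]{0, 0});                    // both initial states
        trace.put(0L, List.of());
        while (!queue.isEmpty()) {
            int[] pair = queue.poll();
            long key = pair[0] * 100_000L + pair[1];
            for (String in : inputs) {
                Edge a = brokerA.get(pair[0]).get(in); // non-null: complete machine
                Edge b = brokerB.get(pair[1]).get(in);
                List<String> seq = new ArrayList<>(trace.get(key));
                seq.add(in);
                if (!a.output().equals(b.output()))
                    return seq;                        // outputs differ: suspicious
                long next = a.target() * 100_000L + b.target();
                if (trace.putIfAbsent(next, seq) == null)
                    queue.add(new int[]{a.target(), b.target()});
            }
        }
        return null;                                   // models are equivalent
    }
}
```

A non-null result would then be replayed against both real brokers to decide which of the two violates the specification.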
Model learning has gained increasing interest in recent years. It derives behavioural models from test data of black-box systems. The main advantage of such techniques is that they enable model-based analysis without access to the internals of a system. Applications range from fully automated testing through model checking to system understanding. Current work focuses on learning variations of finite state machines; however, most techniques consider only discrete time. In this paper, we present a method for learning timed automata, i.e., finite state machines extended with real-valued clocks. The learning method generates a model consistent with a set of timed traces collected by testing. This generation is based on genetic programming, a search-based technique for automatic program creation. We evaluate our approach on 44 timed systems, comprising four systems from the literature and 40 randomly generated examples.
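A central ingredient of such a search is the fitness of a candidate model with respect to the recorded traces. The sketch below is a deliberately simplified illustration, not the paper's exact algorithm: it assumes a single clock, interval guards, and deterministic replay, and scores a candidate timed automaton by the fraction of timed traces it can reproduce. The genetic search would then mutate and recombine candidates to push this score towards one.

```java
// Simplified, hypothetical fitness evaluation for genetic programming over
// timed automata: one clock, interval guards, deterministic replay.
import java.util.List;

public class TimedFitness {
    /** One guarded edge: take `action` if guardLow <= clock <= guardHigh. */
    record Edge(int source, String action, double guardLow, double guardHigh,
                boolean resetClock, int target) {}

    record TimedStep(double delay, String action) {}

    /** Fraction of recorded traces that the candidate automaton reproduces. */
    static double fitness(List<Edge> candidate, List<List<TimedStep>> traces) {
        int accepted = 0;
        for (List<TimedStep> trace : traces)
            if (runs(candidate, trace)) accepted++;
        return (double) accepted / traces.size();
    }

    static boolean runs(List<Edge> edges, List<TimedStep> trace) {
        int loc = 0;          // initial location
        double clock = 0.0;   // single real-valued clock
        step:
        for (TimedStep s : trace) {
            clock += s.delay();
            for (Edge e : edges) {
                if (e.source() == loc && e.action().equals(s.action())
                        && clock >= e.guardLow() && clock <= e.guardHigh()) {
                    if (e.resetClock()) clock = 0.0;
                    loc = e.target();
                    continue step; // edge taken, proceed to next timed step
                }
            }
            return false;     // no enabled edge: trace not reproduced
        }
        return true;
    }
}
```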
This volume contains the proceedings of the Eighth Workshop on Model-Based Testing (MBT 2013), which was held on March 17, 2013 in Rome, Italy, as a satellite event of the European Joint Conferences on Theory and Practice of Software (ETAPS 2013). The workshop is devoted to model-based testing of both software and hardware. Model-based testing uses models that describe the required behavior of the system under consideration to guide such efforts as test selection and test-result evaluation. Testing validates the real system behavior against the models and checks that the implementation conforms to them, but it can also uncover errors in the models themselves. The first MBT workshop was held in 2004 in Barcelona. At that time, MBT had already become a hot topic, but the MBT workshop was the first event devoted mostly to this domain. Since then, the area has generated enormous scientific interest, and today several specialized workshops and broader conferences on software and hardware design and quality assurance cover model-based testing. MBT has become one of the most powerful system-analysis tools; one of the latest cutting-edge related topics is the application of MBT to security analysis and testing. The MBT workshop tries to keep up with these trends.
This volume contains the proceedings of the Ninth Workshop on Model-Based Testing (MBT 2014), which was held in Grenoble, France, on April 6, 2014, as a satellite workshop of the European Joint Conferences on Theory and Practice of Software (ETAPS 2014).
Testing networked systems is challenging: the client or server side cannot be tested by itself. We present a solution using the tool Modbat, which generates test cases for Java's network library java.nio; we test both blocking and non-blocking network functions. Our test model can dynamically simulate actions in multiple worker and client threads, thanks to a carefully orchestrated design that covers non-determinism while ensuring progress.
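For readers unfamiliar with java.nio, the sketch below shows the non-blocking client pattern such a test model must cover (host and port are placeholders): a connect may complete asynchronously via finishConnect, and a read may legitimately return zero bytes, which is exactly the kind of non-determinism the model has to account for.

```java
// Non-blocking java.nio client: connect without blocking, register with a
// Selector, finish the connection on readiness, then watch for reads.
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;

public class NonBlockingClient {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        SocketChannel channel = SocketChannel.open();
        channel.configureBlocking(false);                 // non-blocking mode
        channel.connect(new InetSocketAddress("localhost", 8080));
        channel.register(selector, SelectionKey.OP_CONNECT);

        while (selector.select() > 0) {                   // block until ready
            var keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isConnectable() && channel.finishConnect()) {
                    // connection established; now watch for readable data
                    key.interestOps(SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    ByteBuffer buf = ByteBuffer.allocate(256);
                    int n = channel.read(buf);            // may read 0 bytes
                    if (n < 0) { channel.close(); return; }
                }
            }
        }
    }
}
```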
ReTest is a novel testing tool for Java applications with a graphical user interface (GUI), combining monkey testing and difference testing. Since this combination sidesteps the oracle problem, it enables the generation of GUI-based regression tests. ReTest makes use of evolutionary computing (EC), particularly a genetic algorithm (GA), to optimize these tests towards code coverage. While this is indeed a desirable goal in terms of software testing and potentially finds many bugs, it lacks one major ingredient: human behavior. Consequently, human testers often find the results less reasonable and difficult to interpret. This thesis proposes a new approach that improves the initial population of the GA with the aid of machine learning (ML), forming an ML-technique enhanced EC (MLEC) algorithm. To this end, existing tests are exploited to extract information on how human testers use the given GUI. The obtained data is then used to train an artificial neural network (ANN), which ranks the available GUI actions and their underlying GUI components at runtime, reducing the gap between manually created and automatically generated regression tests. Although the approach is implemented on top of ReTest, it can easily be used to guide any form of monkey testing. The results show that with only little training data, the ANN reaches an accuracy of 82% and the resulting tests represent an improvement, without significantly reducing overall code coverage or performance.
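The ranking step can be pictured as follows; this is a minimal sketch with all names hypothetical, in which a single linear layer with a sigmoid stands in for the trained ANN: each available GUI action gets a score estimating how likely a human tester would choose it, and the monkey tester prefers highly ranked actions over uniformly random ones.

```java
// Hypothetical sketch of ANN-guided action ranking for a monkey tester.
// A linear layer plus sigmoid stands in for the trained network.
import java.util.*;

public class ActionRanker {
    record GuiAction(String componentId, double[] features) {}

    private final double[] weights;  // learned from recorded human tests
    private final double bias;

    ActionRanker(double[] weights, double bias) {
        this.weights = weights;
        this.bias = bias;
    }

    /** Score in (0,1): how likely a human tester would pick this action. */
    double score(GuiAction a) {
        double z = bias;
        for (int i = 0; i < weights.length; i++)
            z += weights[i] * a.features()[i];
        return 1.0 / (1.0 + Math.exp(-z));               // sigmoid
    }

    /** Actions sorted best-first, to bias the monkey tester's choice. */
    List<GuiAction> rank(List<GuiAction> available) {
        List<GuiAction> sorted = new ArrayList<>(available);
        sorted.sort(Comparator.comparingDouble(this::score).reversed());
        return sorted;
    }
}
```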