Many existing fault localisation techniques become less effective, or even inapplicable, without the support of a rich test suite. To overcome this challenge, we present QFiD, a human-in-the-loop fault localisation technique that requires only a small number of initial failing test cases. We augment the failing test cases with automatically generated test data and elicit oracles from a human developer to label the new test cases. A novel result-aware test prioritisation metric significantly reduces the labelling effort by ordering the test cases so that localisation accuracy is maximised. An evaluation with EvoSuite and our prioritisation metric shows that QFiD significantly increases localisation accuracy: after only ten human labellings, it localises 27% of the real-world faults in Defects4J at the top of the ranking and 66% within the top ten, 13 and 2 times more, respectively, than when using only the initial test cases. QFiD is also resilient to human error, retaining on average 80% of its acc@1 performance when a 30% error rate is introduced into the simulated human oracle.
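To make the described workflow concrete, the sketch below shows a minimal human-in-the-loop localisation loop under simplifying assumptions: each test is modelled as a set of covered program elements, suspiciousness is computed with the standard Ochiai formula, and the greedy `pick_next_test` heuristic is only a placeholder for QFiD's result-aware prioritisation metric, which the abstract does not define. None of the names here reflect QFiD's actual implementation.

```python
import math

def ochiai(e_f, e_p, total_f):
    """Ochiai suspiciousness: e_f/e_p = failing/passing labelled tests
    covering the element, total_f = failing tests labelled so far."""
    denom = math.sqrt(total_f * (e_f + e_p))
    return e_f / denom if denom else 0.0

def rank_elements(labelled):
    """Rank all covered elements by Ochiai over the labelled tests."""
    total_f = sum(1 for _, failed in labelled if failed)
    elems = {e for cov, _ in labelled for e in cov}
    scores = {}
    for e in elems:
        e_f = sum(1 for cov, failed in labelled if failed and e in cov)
        e_p = sum(1 for cov, failed in labelled if not failed and e in cov)
        scores[e] = ochiai(e_f, e_p, total_f)
    return sorted(scores, key=scores.get, reverse=True)

def pick_next_test(pool, labelled):
    """Placeholder prioritisation (NOT QFiD's metric): prefer the
    unlabelled test covering the most currently-suspicious elements."""
    suspicious = set(rank_elements(labelled)[:10]) if labelled else set()
    return max(pool, key=lambda cov: len(cov & suspicious))

def localise(initial_failing, generated_pool, ask_human, budget=10):
    """Seed with the initial failing tests, then elicit up to `budget`
    pass/fail labels from the human oracle, re-ranking after each."""
    labelled = [(cov, True) for cov in initial_failing]
    pool = list(generated_pool)
    for _ in range(min(budget, len(pool))):
        cov = pick_next_test(pool, labelled)
        pool.remove(cov)
        labelled.append((cov, ask_human(cov)))  # human: did it fail?
    return rank_elements(labelled)

# Toy usage with a simulated human oracle (line f.java:7 is "buggy"):
failing = [{"f.java:3", "f.java:7"}]
pool = [{"f.java:3"}, {"f.java:9"}, {"f.java:7", "f.java:9"}]
oracle = lambda cov: "f.java:7" in cov
print(localise(failing, pool, oracle, budget=2))  # ranks f.java:7 first
```

In this toy run, each answered query sharpens the ranking: tests whose labels disagree with the current suspicious set push innocent elements down, which is the intuition behind spending the limited labelling budget on the most informative tests.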