
Automated Mobile App Test Script Intent Generation via Image and Code Understanding

Posted by Shengcheng Yu
Publication date: 2021
Research field: Informatics Engineering
Paper language: English

Testing is the most direct and effective technique to ensure software quality. However, understanding poorly-commented tests, which are common in industrial projects, is a burden for developers. Mobile applications (apps) are GUI-intensive and event-driven, so test scripts focusing on GUI interactions play a more important role in mobile app testing than test cases for the source code alone. Therefore, more attention should be paid to user interactions and the corresponding user event responses. However, test scripts are only loosely linked to the app under test (AUT) through widget selectors, making it hard to map operations to the functionality code of the AUT. In such a situation, code understanding algorithms may lose efficacy if applied directly to mobile app test scripts. We present a novel approach, TestIntent, to infer the intent of mobile app test scripts. TestIntent combines GUI image understanding and code understanding technologies. The test script is transformed into an operation sequence model. For each operation, TestIntent extracts the operated widget selector and links the selector to the UI layout structure, which stores detailed information about the widgets, including coordinates, type, etc. With code understanding technologies, TestIntent can locate response methods in the source code. Afterwards, NLP algorithms are adopted to understand the code and generate descriptions. TestIntent can also locate widgets on the app GUI images and infer widget intent with an encoder-decoder model. By combining the results from GUI and code understanding, TestIntent generates test intents in natural language. We also conducted an empirical experiment whose results demonstrate the strong performance of TestIntent, and a user study showed that TestIntent saves developers time when understanding test scripts.
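Below is a minimal Python sketch of the selector-to-layout linking step the abstract describes: pull widget selectors out of a test script and resolve each one against a UI layout dump to recover the widget's type and coordinates. The Espresso-style regex, the uiautomator-style dump, and all helper names are illustrative assumptions, not TestIntent's actual implementation.

```python
import re
import xml.etree.ElementTree as ET

# Assumed Espresso-style selector, e.g. onView(withId(R.id.login_btn))
SELECTOR_RE = re.compile(r"withId\(R\.id\.(\w+)\)")

def extract_selectors(test_script: str) -> list:
    """Pull widget selectors out of a recorded test script."""
    return SELECTOR_RE.findall(test_script)

def resolve_widget(layout_xml: str, widget_id: str):
    """Link a selector to a uiautomator-style layout dump to recover the
    widget's detailed information (type, bounds, text)."""
    root = ET.fromstring(layout_xml)
    for node in root.iter("node"):
        if node.get("resource-id", "").endswith(":id/" + widget_id):
            return {
                "id": widget_id,
                "type": node.get("class"),     # e.g. android.widget.Button
                "bounds": node.get("bounds"),  # e.g. [0,210][1080,340]
                "text": node.get("text", ""),
            }
    return None

layout = """<hierarchy>
  <node resource-id="com.example:id/login_btn"
        class="android.widget.Button" bounds="[0,210][1080,340]" text="Log in"/>
</hierarchy>"""
script = "onView(withId(R.id.login_btn)).perform(click());"

for wid in extract_selectors(script):
    print(resolve_widget(layout, wid))
```

The recovered bounds are what would let the approach crop the widget region out of a GUI screenshot before feeding it to an image-understanding model.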




Read also

In modern programming languages, exception handling is an effective mechanism to avoid unexpected runtime errors; failing to catch and handle exceptions can lead to serious issues like system crashes, resource leaks, or negative end-user experiences. However, writing correct exception handling code is often challenging in mobile app development due to the fast-changing nature of API libraries for mobile apps and the insufficiency of their documentation and source code examples. Our prior study shows that in practice mobile app developers introduce many exception-related bugs and still use bad exception handling practices (e.g., catching an exception and doing nothing). To address such problems, in this paper we introduce two novel techniques for recommending correct exception handling code. One technique, XRank, recommends code to catch an exception likely to occur in a code snippet. The other, XHand, recommends correction code for such an occurring exception. We have developed ExAssist, a code recommendation tool for exception handling that uses XRank and XHand. The empirical evaluation shows that our techniques are highly effective: for example, XRank has a top-1 accuracy of 70% and a top-3 accuracy of 87%, while XHand's results are 89% and 96%, respectively.
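The "catch an exception and do nothing" anti-pattern the abstract flags, contrasted with the kind of corrective handler a tool like XHand aims to recommend. This is a Python stand-in for illustration (the original study targets mobile API code); the function names are hypothetical.

```python
import json
import logging

logger = logging.getLogger(__name__)

def load_settings_bad(path):
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:
        pass  # anti-pattern: the failure is silently swallowed, caller gets None

def load_settings_better(path):
    try:
        with open(path) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError) as e:
        # recommended shape: catch the specific exceptions this snippet
        # can raise and recover visibly with a logged fallback
        logger.warning("could not load settings from %s: %s", path, e)
        return {}

print(load_settings_bad("missing.json"))     # None, with no trace of the error
print(load_settings_better("missing.json"))  # {} plus a logged warning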
Crowdsourced testing, as a distinct testing paradigm, has attracted much attention in software testing, especially in the mobile application (app) testing field. Compared with in-house testing, crowdsourced testing performs better because it exploits the diverse testing environments of different crowdworkers to address the mobile testing fragmentation problem. However, crowdsourced testing also brings problems. The crowdworkers involved have varying expertise and are not professional testers, so the reports they submit are numerous and of uneven quality. App developers have to distinguish high-quality reports from low-quality ones to support bug revealing and fixing. Some crowdworkers submit inconsistent test reports, meaning the textual descriptions do not match the attached screenshots of the occurring bug. Such reports waste the time and human resources of app development and testing. To solve this problem, we propose ReCoDe, which detects the consistency of crowdsourced test reports via deep image-and-text fusion understanding. First, based on a pre-conducted survey, ReCoDe classifies crowdsourced test reports into 10 categories that cover the vast majority of reported problems. Then, for each category of bugs, ReCoDe applies a distinct processing model that performs deep fusion understanding of both image information and textual descriptions. We have also conducted an experiment to evaluate ReCoDe, and the results show its effectiveness in detecting inconsistent crowdsourced test reports.
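A minimal sketch of the image-and-text fusion idea: encode the screenshot and the textual description separately, concatenate the representations, and classify the pair as consistent or not. The architecture, dimensions, and stand-in linear encoders are assumptions for illustration, not ReCoDe's published model.

```python
import torch
import torch.nn as nn

class FusionConsistencyModel(nn.Module):
    def __init__(self, img_dim=512, txt_dim=256, hidden=128):
        super().__init__()
        # stand-ins for real encoders (e.g. a CNN over the screenshot
        # and a transformer over the report's textual description)
        self.img_proj = nn.Linear(img_dim, hidden)
        self.txt_proj = nn.Linear(txt_dim, hidden)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # consistent vs. inconsistent
        )

    def forward(self, img_feat, txt_feat):
        # late fusion: concatenate the two modality embeddings
        fused = torch.cat([self.img_proj(img_feat), self.txt_proj(txt_feat)], dim=-1)
        return self.classifier(fused)

model = FusionConsistencyModel()
logits = model(torch.randn(4, 512), torch.randn(4, 256))  # batch of 4 reports
print(logits.shape)  # torch.Size([4, 2])
```

Per the abstract, one such model would be trained per report category, rather than a single model for all 10 categories.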
Luis Cruz, Rui Abreu, David Lo (2019)
Software testing is an important phase in the software development life-cycle because it helps identify bugs in a software system before it is shipped to its end users. There are numerous studies on how developers test general-purpose software applications. The idiosyncrasies of mobile software applications, however, set mobile apps apart from general-purpose systems (e.g., desktop applications, stand-alone applications, web services). This paper investigates the working habits and challenges of mobile software developers with respect to testing. A key finding of our exhaustive study of 1000 Android apps is that mobile apps are still tested in a very ad hoc way, if tested at all. However, we show that, as in other types of software, testing increases the quality of apps (demonstrated in user ratings and number of code issues). Furthermore, we find evidence that tests are essential when it comes to engaging the community to contribute to mobile open source software. We discuss reasons for and potential directions to address our findings. Yet another relevant finding of our study is that Continuous Integration and Continuous Deployment (CI/CD) pipelines are rare in the mobile apps world (only 26% of the apps are developed in projects employing CI/CD); we argue that one of the main reasons is the lack of exhaustive and automatic testing.
The fragmentation problem has extended from Android to other platforms, such as iOS, the mobile web, and even mini-programs within some applications (apps). In such a situation, recording and replaying test scripts is a popular approach to automated mobile app testing, but it encounters severe problems when crossing platforms. Differe…
Shuai Lu, Daya Guo, Shuo Ren (2021)
Benchmark datasets have a significant impact on accelerating research in programming language tasks. In this paper, we introduce CodeXGLUE, a benchmark dataset to foster machine learning research for program understanding and generation. CodeXGLUE includes a collection of 10 tasks across 14 datasets and a platform for model evaluation and comparison. CodeXGLUE also features three baseline systems, including BERT-style, GPT-style, and Encoder-Decoder models, to make it easy for researchers to use the platform. The availability of such data and baselines can help the development and validation of new methods that can be applied to various program understanding and generation problems.
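A hedged sketch of pulling one CodeXGLUE task for experimentation. The Hugging Face dataset id and field names below are how the code-to-text task is commonly mirrored; treat them as assumptions and consult the CodeXGLUE repository (github.com/microsoft/CodeXGLUE) for the canonical download scripts.

```python
from datasets import load_dataset

# Load the code-to-text task (Python subset) from the assumed HF mirror.
ds = load_dataset("google/code_x_glue_ct_code_to_text", "python", split="train")

sample = ds[0]
print(sample["code"][:80])       # a source function
print(sample["docstring"][:80])  # its reference natural-language summary
```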