Modern software development relies heavily on Application Programming Interface (API) libraries. However, such libraries often impose constraints on how their API elements may be used, and failing to follow these constraints (API misuse) can lead to serious programming errors. Many approaches have been proposed to detect API misuses, but they still have low accuracy and cannot repair the detected misuses. In this paper, we propose SAM, a novel approach to detect and repair API misuses automatically. SAM uses statistical models to describe five factors involved in any API method call: related method calls, exceptions, pre-conditions, post-conditions, and values of arguments. These statistical models are trained from a large repository of high-quality production code. Then, given a piece of code, SAM verifies each of its method calls against the trained statistical models. If a factor has a sufficiently low probability, the corresponding call is considered an API misuse. SAM then performs an optimal search for editing operations to apply to the code until it has no API issue.
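To make the detection step concrete, the following is a minimal sketch of SAM-style flagging, assuming a hypothetical per-factor model interface and threshold value (neither is given in the abstract):

```python
# Minimal sketch of SAM-style misuse flagging; the model interface
# (.probability) and the threshold are assumptions, not SAM's actual API.

THRESHOLD = 0.05  # hypothetical cut-off for "sufficiently low probability"

FACTORS = ["related_calls", "exceptions", "preconditions",
           "postconditions", "argument_values"]

def detect_misuses(method_calls, models):
    """models: dict mapping each factor name to a statistical model
    trained on production code (hypothetical interface)."""
    misuses = []
    for call in method_calls:
        for factor in FACTORS:
            if models[factor].probability(call) < THRESHOLD:
                misuses.append((call, factor))
                break  # one improbable factor suffices to flag the call
    return misuses
```

The subsequent repair step would then search over edit operations (e.g., inserting a missing guard or call) until no factor falls below the threshold.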
When developing mobile apps, programmers rely heavily on standard API frameworks and libraries. However, learning and using those APIs is often challenging due to the fast-changing nature of API frameworks for mobile systems, the complexity of API usages, insufficient documentation, and the unavailability of source code examples. In this paper, we propose a novel approach to learn API usages from the bytecode of Android mobile apps. Our core contributions include: i) ARUS, a graph-based representation of API usage scenarios; ii) HAPI, a statistical, generative model of API usages; and iii) three algorithms to extract ARUS from app bytecode, to train HAPI on method call sequences extracted from ARUS, and to recommend method calls in code completion engines using the trained HAPI. Our empirical evaluation suggests that our approach can learn useful API usage models that provide recommendations with higher accuracy than the baseline n-gram model.
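The recommendation task can be illustrated with a simple statistical model over extracted call sequences; the sketch below uses a bigram counter in place of HAPI (whose internals the abstract does not detail), with hypothetical names throughout:

```python
# Bigram recommender over method-call sequences; illustrates the task
# HAPI addresses, not HAPI itself.
from collections import Counter, defaultdict

def train(call_sequences):
    """call_sequences: lists of API method names, e.g. extracted
    from ARUS graphs mined from app bytecode."""
    counts = defaultdict(Counter)
    for seq in call_sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def recommend(counts, last_call, top_k=5):
    """Rank candidate next calls by conditional frequency."""
    total = sum(counts[last_call].values()) or 1
    return [(call, n / total)
            for call, n in counts[last_call].most_common(top_k)]
```

A generative model such as HAPI replaces these raw counts with learned structure, which is what the reported accuracy gain over the n-gram baseline reflects.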
Modern applications increasingly interact with web APIs -- reusable components, deployed and operated outside the application, and accessed over the network. Their existence, arguably, spurs application innovation, making it easy to integrate data or functionality. While previous work has analyzed the ecosystem of web APIs and their design, little is known about web API quality at runtime. This gap is critical, as qualities such as availability, latency, and provider security preferences can severely impact applications and user experience. In this paper, we revisit a 3-month, geo-distributed benchmark of popular web APIs, originally performed in 2015. We repeat this benchmark in 2018 and compare the results of the two benchmarks regarding availability and latency. We furthermore introduce new results from assessing provider security preferences, collected both in 2015 and 2018, and results from our attempts to reach out to API providers with the results of our 2015 experiments. Our extensive experiments show that web API qualities vary (1) with the geo-distribution of clients, (2) over the course of our individual experiments, and (3) between the two experiments. Our findings provide evidence to foster the discussion around web API quality and can act as a basis for the creation of tools and approaches to mitigate quality issues.
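A single client-side probe in such a benchmark can be sketched as follows; the attempt count, probe interval, and endpoint are placeholders, not the study's actual configuration:

```python
# One client-side availability/latency probe (illustrative parameters).
import time
import urllib.request

def probe(url, attempts=10, timeout=5.0):
    latencies, successes = [], 0
    for _ in range(attempts):
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if 200 <= resp.status < 300:
                    successes += 1
            latencies.append(time.monotonic() - start)
        except OSError:
            pass  # timeout or connection failure counts as unavailable
        time.sleep(1.0)  # hypothetical probe interval
    return successes / attempts, latencies
```

Running such probes from clients in different regions over months is what exposes the geographic and temporal quality variations the paper reports.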
Nowadays, developers often reuse existing APIs to implement their programming tasks, and many API usage patterns have been mined to help developers learn API usage rules. However, many variables still have to be synthesized when developers integrate these patterns into their programming context. To deal with this issue, we propose a comprehensive approach to integrating API usage patterns. We first perform an empirical study analyzing how API usage patterns are integrated in real-world projects. We find that the expressions for variable synthesis are often non-trivial and can be divided into five syntax types. Based on this observation, we propose an approach that helps developers interactively complete API usage patterns. Compared to existing code completion techniques, our approach can recommend infrequent expressions accompanied by their real-world usage examples, according to the user's intent. Our evaluation shows that our approach helps users integrate APIs more efficiently and complete programming tasks faster than existing techniques.
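The variable-synthesis step can be sketched as follows; the pattern and context representations are simplified placeholders, not the paper's actual data structures:

```python
# Filling a pattern's variable holes from the surrounding code context,
# pairing each candidate with mined usage examples (hypothetical names).

def complete_pattern(pattern_holes, context_vars, example_db):
    """pattern_holes: list of (hole_name, required_type);
    context_vars: in-scope variable name -> type;
    example_db: expression -> real-world usage examples."""
    suggestions = {}
    for hole, required_type in pattern_holes:
        candidates = [name for name, var_type in context_vars.items()
                      if var_type == required_type]
        suggestions[hole] = [(expr, example_db.get(expr, []))
                             for expr in candidates]
    return suggestions
```

In practice the synthesized expressions span five syntax types rather than just simple variable references, which is why ranking infrequent candidates together with real-world examples matters.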
Many JavaScript applications perform HTTP requests to web APIs, relying on the request URL, HTTP method, and request data to be constructed correctly by string operations. Traditional compile-time error checking, such as detecting a call to a non-existent method in Java, is not available for checking whether such requests comply with the requirements of a web API. In this paper, we propose an approach to statically check web API requests in JavaScript. Our approach first extracts a request's URL string, HTTP method, and the corresponding request data using an inter-procedural string analysis, and then checks whether the request conforms to given web API specifications. We evaluated our approach by checking whether web API requests in JavaScript files mined from GitHub are consistent or inconsistent with publicly available API specifications. For the 6,575 requests in scope, our approach determined whether the request's URL and HTTP method were consistent or inconsistent with web API specifications with a precision of 96.0%. Our approach also correctly determined whether the extracted request data was consistent or inconsistent with the data requirements, with a precision of 87.9% for payload data and 99.9% for query data. In a systematic analysis of the inconsistent cases, we found that many of them were due to errors in the client code. The proposed checker can be integrated with code editors or continuous integration tools to warn programmers about code containing potentially erroneous requests.
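The conformance check itself can be sketched against a simplified specification format (a real checker would consume an OpenAPI document):

```python
# Checking an extracted request against a simplified web API spec.

def check_request(request, spec):
    """request: {'path': str, 'method': str, 'query': dict};
    spec: {path: {method: {'required_query': [param, ...]}}}."""
    path_spec = spec.get(request["path"])
    if path_spec is None:
        return "inconsistent: unknown path"
    operation = path_spec.get(request["method"].lower())
    if operation is None:
        return "inconsistent: method not allowed for this path"
    missing = [p for p in operation.get("required_query", [])
               if p not in request["query"]]
    if missing:
        return f"inconsistent: missing required query parameters {missing}"
    return "consistent"
```

The harder part, which this sketch omits, is the inter-procedural string analysis that recovers the URL, method, and data from the JavaScript source in the first place.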
Testing networked systems is challenging: neither the client nor the server side can be tested in isolation. We present a solution using the tool Modbat, which generates test cases for Java's network library java.nio, where we test both blocking and non-blocking network functions. Our test model can dynamically simulate actions in multiple worker and client threads, thanks to a carefully orchestrated design that covers non-determinism while ensuring progress.
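Modbat models themselves are not shown here; the following Python analogue merely illustrates why client and server must be exercised together, using a one-shot echo server:

```python
# Python analogue (not Modbat): client and server tested together.
import socket
import threading

def echo_server(listener):
    conn, _ = listener.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # echo so the client can be checked

def test_echo_roundtrip():
    listener = socket.create_server(("127.0.0.1", 0))  # OS-chosen free port
    port = listener.getsockname()[1]
    server = threading.Thread(target=echo_server, args=(listener,))
    server.start()
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(b"ping")
        assert client.recv(1024) == b"ping"
    server.join()
    listener.close()

test_echo_roundtrip()
```

A model-based tool like Modbat additionally explores many interleavings of blocking and non-blocking operations across threads, rather than a single fixed scenario like this one.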