
On the diversity and frequency of code related to mathematical formulas in real-world Java projects

Added by Sebastian Baltes
Publication date: 2020
Research language: English





In this paper, the term formula code refers to fragments of source code that implement a mathematical formula. We present empirical studies that analyze the diversity and frequency of formula code in open-source software projects. In an exploratory study, we investigated what kinds of formulas are implemented in real-world Java projects and derived syntactical patterns and constraints. We refined these patterns for sum and product formulas to automatically detect formula code in software archives and to reconstruct the implemented formula in mathematical notation. In a quantitative study of a large sample of engineered Java projects on GitHub, we analyzed the frequency of formula code and estimated that one in 700 lines of code in this sample implements a sum or product formula. For a sample of scientific-computing projects, we found that one in 100 lines of code implements a sum or product formula. To assess the need for tool support, we investigated the helpfulness of comments for program understanding in a sample of formula-code fragments and performed an online survey. Our findings provide first insights into the characteristics of formula code, which can motivate further studies on the role of formula code in software projects and the design of formula-related tools.
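To make the notion of formula code concrete, the following is a small hypothetical Java fragment (not taken from the studied projects) that implements the sum formula for the sum of squares of an array. The accumulator-plus-counting-loop shape is the kind of syntactical pattern that a detector for sum and product formulas would look for; the class and method names are illustrative only.

```java
// Hypothetical example of "formula code": a loop implementing the sum
// formula  sum_{i=0}^{n-1} x_i * x_i  (the sum of squares).
// The pattern -- an accumulator initialized to zero and updated inside a
// counting loop -- is the kind of structure that sum/product detection
// would match and translate back into mathematical notation.
public final class SumOfSquares {

    static double sumOfSquares(double[] x) {
        double sum = 0.0;                 // accumulator for the sum
        for (int i = 0; i < x.length; i++) {
            sum += x[i] * x[i];           // summand: x_i^2
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] samples = {1.0, 2.0, 3.0};
        System.out.println(sumOfSquares(samples)); // prints 14.0
    }
}
```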

Related research

Software testing is one of the most important Quality Assurance (QA) components. Many researchers address the testing process in terms of tester motivation and how tests should or should not be written. However, the existing recommendations do not tell us how tests are actually written in real projects. In this paper, the following was investigated: (i) the denotation of the word test in different natural languages; (ii) whether the number of occurrences of the word test correlates with the number of test cases; and (iii) which testing frameworks are used most. The analysis was performed on 38 GitHub open-source repositories carefully selected from a set of 4.3M GitHub projects. We analyzed 20,340 test cases in 803 classes manually and 170k classes using an automated approach. The results show that: (i) there exists a weak correlation (r = 0.655) between the number of occurrences of the word test and the number of test cases in a class; (ii) the proposed algorithm using static file analysis correctly detected 97% of test cases; (iii) 15% of the analyzed classes used a main() function; these represent regular Java programs that test the production code without using any third-party framework. The identification of such tests is very complex due to implementation diversity. The results may be leveraged to more quickly identify and locate test cases in a repository, to understand practices in customized testing solutions, and to mine tests to improve program comprehension in the future. A sketch of such a static analysis follows below.
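As a rough illustration of the kind of static file analysis described above (our own sketch, not the paper's algorithm), the following program counts occurrences of the word test in a Java source file, counts JUnit @Test annotations, and checks for a main() method that could indicate a framework-free test. Class and pattern names are hypothetical.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch of a static file analysis for test-related indicators:
// occurrences of the word "test", JUnit @Test annotations, and a plain
// main() method that may exercise production code without a framework.
public final class TestFileScanner {

    private static final Pattern TEST_WORD = Pattern.compile("(?i)\\btest\\b");
    private static final Pattern JUNIT_CASE = Pattern.compile("@Test\\b");
    private static final Pattern MAIN_METHOD =
            Pattern.compile("public\\s+static\\s+void\\s+main\\s*\\(");

    public static void main(String[] args) throws Exception {
        String source = Files.readString(Path.of(args[0]));

        int wordCount = count(TEST_WORD, source);
        int junitCases = count(JUNIT_CASE, source);
        boolean mainBasedCandidate = MAIN_METHOD.matcher(source).find() && wordCount > 0;

        System.out.printf("occurrences of 'test': %d, @Test cases: %d, main()-based candidate: %b%n",
                wordCount, junitCases, mainBasedCandidate);
    }

    private static int count(Pattern pattern, String source) {
        Matcher matcher = pattern.matcher(source);
        int n = 0;
        while (matcher.find()) n++;
        return n;
    }
}
```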
JSON is an essential file and data format in domains that span scientific computing, web APIs, and configuration management. Its popularity has motivated significant software development effort to build multiple libraries to process JSON data. Previous studies focus on performance comparisons among these libraries and lack a software engineering perspective. We present the first systematic analysis and comparison of the input/output behavior of 20 JSON libraries in a single software ecosystem: Java/Maven. We assess behavior diversity by running each library against a curated set of 473 JSON files, including both well-formed and ill-formed files. The main design differences, which influence the behavior of the libraries, relate to the choice of data structure to represent JSON objects and to the encoding of numbers. We observe a remarkable behavioral diversity with ill-formed files, or corner cases such as large numbers or duplicate data. Our unique behavioral assessment of JSON libraries paves the way for a robust processing of ill-formed files through a multi-version architecture. The sketch below illustrates such a comparison.
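The following sketch illustrates the kind of behavioral comparison described above, assuming Jackson and Gson are on the classpath. It feeds the same document containing a number that exceeds the range of long to both libraries and prints the Java type each one chooses to represent it; the exact types reported depend on the library versions used.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.gson.JsonElement;
import com.google.gson.JsonParser;

// Probe how two JSON libraries represent a corner case (a very large number).
// Differences in the chosen number type are one source of the behavioral
// diversity discussed above.
public final class JsonDiversityProbe {

    public static void main(String[] args) throws Exception {
        String json = "{\"n\": 123456789012345678901234567890}";

        JsonNode jacksonNode = new ObjectMapper().readTree(json);
        System.out.println("Jackson number type: "
                + jacksonNode.get("n").numberValue().getClass().getName());

        JsonElement gsonElement = JsonParser.parseString(json);
        System.out.println("Gson number type: "
                + gsonElement.getAsJsonObject().get("n").getAsNumber().getClass().getName());
    }
}
```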
Third-party libraries are a central building block for developing software systems. However, outdated third-party libraries are commonly used, and developers are usually not fully aware of the potential risks. Therefore, a quantitative and holistic study of the usage, updates, and risks of third-party libraries can provide practical insights to improve the ecosystem sustainably. In this paper, we conduct such a study in the Java ecosystem. Specifically, we conduct a library usage analysis (e.g., usage intensity and outdatedness) and a library update analysis (e.g., update intensity and delay) using 806 open-source projects. The two analyses aim to quantify usage and update practices holistically from the perspective of both open-source projects and third-party libraries. Then, we conduct a library risk analysis (e.g., potential risk and developer response) in terms of bugs with 15 popularly used third-party libraries. This analysis aims to quantify the potential risk of using outdated libraries and the developer response to the risk. Our findings from the three analyses provide practical insights to developers and researchers on problems and potential solutions in maintaining third-party libraries (e.g., smart alerting and automated updating of outdated libraries). To demonstrate the usefulness of our findings, we propose a bug-driven alerting system for assisting developers to make confident decisions in updating third-party libraries.
In software engineering, some of the most critical activities are maintenance and evolution. However, to perform both with quality and to minimize impacts and risks, developers first need to analyze and identify where the main problems come from. In this paper, we introduce DR-Tools Suite, a set of lightweight open-source tools that analyze and calculate source code metrics, allowing developers to visualize the results in different formats and graphs. We also define a set of heuristics to support the code analysis. We conducted two case studies (one academic and one industrial) to collect feedback on the tool suite and on how we will evolve the tools, as well as insights for developing new tools that support developers in their daily work.
In any sufficiently complex software system there are experts who have a deeper understanding of parts of the system than others. However, it is not always clear who these experts are and with which particular parts of the system they can help. We propose a framework to elicit the expertise of developers and recommend experts by analyzing complexity measures over time. Furthermore, teams can detect those parts of the software for which currently no, or only a few, experts exist and take preventive actions to keep the collective code knowledge and ownership high. We employed the developed approach at a medium-sized company. The results were evaluated with a survey comparing the perceived and the computed expertise of developers. We show that aggregated code metrics can be used to identify experts for different software components. The identified experts were rated as acceptable candidates by developers in over 90% of all cases; a sketch of the aggregation idea follows below.
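A minimal sketch of the underlying idea, with hypothetical names and a made-up "complexity delta" metric (not the authors' framework): aggregate per-change metric contributions by developer and component, then recommend the developer with the highest aggregate as the expert candidate for each component.

```java
import java.util.HashMap;
import java.util.Map;

// Aggregate a per-change metric contribution by developer and component,
// then pick the top contributor per component as the expert candidate.
public final class ExpertRecommender {

    record Change(String developer, String component, double complexityDelta) {}

    static Map<String, String> recommendExperts(Iterable<Change> history) {
        // component -> (developer -> aggregated contribution)
        Map<String, Map<String, Double>> scores = new HashMap<>();
        for (Change c : history) {
            scores.computeIfAbsent(c.component(), k -> new HashMap<>())
                  .merge(c.developer(), Math.abs(c.complexityDelta()), Double::sum);
        }
        // component -> developer with the largest aggregated contribution
        Map<String, String> experts = new HashMap<>();
        scores.forEach((component, byDeveloper) -> experts.put(component,
                byDeveloper.entrySet().stream()
                           .max(Map.Entry.comparingByValue())
                           .map(Map.Entry::getKey)
                           .orElseThrow()));
        return experts;
    }

    public static void main(String[] args) {
        var history = java.util.List.of(
                new Change("alice", "parser", 12.0),
                new Change("bob", "parser", 3.5),
                new Change("bob", "ui", 7.0));
        System.out.println(recommendExperts(history)); // e.g. {parser=alice, ui=bob}
    }
}
```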