
Determining Microservice Boundaries: A Case Study Using Static and Dynamic Software Analysis

Posted by: Filipe Correia
Publication date: 2020
Research field: Informatics Engineering
Paper language: English
Author: Tiago Matias





A number of approaches have been proposed to identify service boundaries when decomposing a monolith into microservices. However, only a few use systematic methods and have been demonstrated with replicable empirical studies. We describe a systematic approach for refactoring systems to microservice architectures that uses static analysis to determine the system's structure and dynamic analysis to understand its actual behavior. A prototype tool, MonoBreaker, was built using this approach and used to conduct a case study on a real-world software project. The goal was to assess the feasibility and benefits of a systematic approach to decomposition that combines static and dynamic analysis. The three study participants regarded the decomposition proposed by our tool as positive, and considered it an improvement over approaches that rely only on static analysis.
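To make the combination concrete, here is a minimal sketch of one way static and dynamic coupling data could be merged into a single weighted graph and greedily clustered into service candidates. The module names, the weighting scheme (a simple linear blend), and the merge threshold are all illustrative assumptions, not MonoBreaker's actual algorithm.

```java
import java.util.*;

public class BoundarySketch {
    // hypothetical edges "A->B" weighted by how strongly A depends on B
    static Map<String, Double> combine(Map<String, Integer> staticDeps,
                                       Map<String, Integer> runtimeCalls,
                                       double alpha) {
        Map<String, Double> combined = new HashMap<>();
        staticDeps.forEach((edge, n) -> combined.merge(edge, alpha * n, Double::sum));
        runtimeCalls.forEach((edge, n) -> combined.merge(edge, (1 - alpha) * n, Double::sum));
        return combined;
    }

    // union-find: modules merged here end up in the same candidate service
    static Map<String, String> parent = new HashMap<>();
    static String find(String x) {
        parent.putIfAbsent(x, x);
        String p = parent.get(x);
        if (!p.equals(x)) { p = find(p); parent.put(x, p); }
        return p;
    }

    public static void main(String[] args) {
        // static import/reference counts vs. observed runtime call counts
        Map<String, Integer> staticDeps = Map.of(
                "Orders->Billing", 4, "Orders->Catalog", 1, "Catalog->Search", 3);
        Map<String, Integer> runtimeCalls = Map.of(
                "Orders->Billing", 120, "Orders->Catalog", 2, "Catalog->Search", 80);

        Map<String, Double> weights = combine(staticDeps, runtimeCalls, 0.5);

        // merge modules whose combined coupling exceeds a chosen threshold
        double threshold = 10.0;
        weights.forEach((edge, w) -> {
            if (w >= threshold) {
                String[] ab = edge.split("->");
                parent.put(find(ab[0]), find(ab[1]));
            }
        });

        // group modules by union-find root: each group is a service candidate
        Map<String, List<String>> services = new TreeMap<>();
        for (String m : List.of("Orders", "Billing", "Catalog", "Search"))
            services.computeIfAbsent(find(m), k -> new ArrayList<>()).add(m);
        services.values().forEach(s -> System.out.println("candidate service: " + s));
    }
}
```

Running this prints two candidate services, {Orders, Billing} and {Catalog, Search}: the runtime call counts reveal coupling that static reference counts alone would understate, which is the intuition behind combining the two analyses.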




Read also

Verifiers that can prove programs correct against their full functional specification require, for programs with loops, additional annotations in the form of loop invariants: properties that hold for every iteration of a loop. We show that significant loop invariant candidates can be generated by systematically mutating postconditions; dynamic checking (based on automatically generated tests) then weeds out invalid candidates, and static checking selects provably valid ones. We present a framework that automatically applies these techniques to support a program prover, paving the way for fully automatic verification without manually written loop invariants: applied to 28 methods (including 39 different loops) from various java.util classes (occasionally modified to avoid using Java features not fully supported by the static checker), our DYNAMATE prototype automatically discharged 97% of all proof obligations, resulting in automatic complete correctness proofs of 25 of the 28 methods and outperforming several state-of-the-art tools for fully automatic verification.
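As a concrete illustration of the idea, the sketch below mutates a summation postcondition into invariant candidates (for example, replacing the bound a.length with the loop variable i) and uses randomly generated tests to weed out candidates that fail at some loop iteration. The method, the candidate set, and the test generator are invented for illustration; DYNAMATE's actual pipeline works on JML-annotated code and hands surviving candidates to a static prover.

```java
import java.util.*;
import java.util.function.BiPredicate;

public class InvariantSketch {
    static int partialSum(int[] arr, int k) {   // sum of arr[0..k)
        int s = 0;
        for (int j = 0; j < k; j++) s += arr[j];
        return s;
    }

    public static void main(String[] args) {
        // Candidate invariants for "s sums the array", obtained by mutating
        // the postcondition s == partialSum(a, a.length); st = {i, s}.
        Map<String, BiPredicate<int[], int[]>> candidates = new LinkedHashMap<>();
        candidates.put("s == partialSum(a, i)",          // the valid invariant
                (arr, st) -> st[1] == partialSum(arr, st[0]));
        candidates.put("s == partialSum(a, a.length)",   // postcondition itself: fails mid-loop
                (arr, st) -> st[1] == partialSum(arr, arr.length));
        candidates.put("s >= 0",                         // fails for negative prefixes
                (arr, st) -> st[1] >= 0);

        Random rnd = new Random(42);
        Set<String> survivors = new LinkedHashSet<>(candidates.keySet());
        for (int t = 0; t < 100; t++) {                  // dynamic checking on random tests
            int[] a = rnd.ints(rnd.nextInt(8), -5, 6).toArray();
            int s = 0;
            for (int i = 0; i <= a.length; i++) {        // check at every loop head and at exit
                final int fi = i, fs = s;
                survivors.removeIf(name -> !candidates.get(name).test(a, new int[]{fi, fs}));
                if (i < a.length) s += a[i];
            }
        }
        // survivors would be handed to a static prover in the real pipeline
        System.out.println("candidates surviving dynamic checking: " + survivors);
    }
}
```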
Nilay Oza, 2013
Cloud-based infrastructure has been increasingly adopted by industry in distributed software development (DSD) environments. Its proponents claim that its benefits include reduced cost, increased speed, and greater productivity in software development. Empirical evaluations, however, are at a nascent stage in examining both the benefits and the risks of cloud-based infrastructure. The objective of this paper is to identify potential benefits and risks of using the cloud in a DSD project conducted by teams based in Helsinki and Madrid. A cross-case qualitative analysis is performed based on focus groups conducted at the Helsinki and Madrid sites, supplemented by participants' observations. The results of the analysis indicated that the main benefits of using the cloud are rapid development, continuous integration, cost savings, code sharing, and faster ramp-up. The key risks identified by the project are dependencies, unavailability of access to the cloud, code commitment and integration, technical debt, and additional support costs. The results revealed that if such environments are not planned and set up carefully, the benefits of using the cloud in DSD projects might be overshadowed by the risks associated with it.
Combinatorial testing has been suggested as an effective method of creating test cases at a lower cost. However, industrially applicable tools for modeling and combinatorial test generation are still scarce. As a direct effect, combinatorial testing has seen only limited uptake in industry, which calls into question its practical usefulness. This lack of evidence is especially troublesome if we consider the use of combinatorial test generation for industrial safety-critical control software, such as that found in trains, airplanes, and power plants. To study the industrial application of combinatorial testing, we evaluated ACTS, a popular tool for combinatorial modeling and test generation, in terms of applicability and test efficiency on industrial-sized IEC 61131-3 control software running on Programmable Logic Controllers (PLCs). We assessed ACTS in terms of its direct applicability to combinatorial modeling of IEC 61131-3 industrial software and its efficiency in terms of generation time and test suite size. We used 17 industrial control programs provided by Bombardier Transportation Sweden AB and used in a train control management system. Our results show that not all combinations of algorithms and interaction strengths could generate a test suite within a realistic cut-off time. The results of the modeling process and the efficiency evaluation of ACTS are useful both for practitioners considering combinatorial testing for industrial control software and for researchers trying to improve such combinatorial testing techniques.
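For readers unfamiliar with combinatorial test generation, the sketch below greedily builds a pairwise (t=2) test suite, AETG-style, over an invented three-parameter model standing in for an IEC 61131-3 input space. ACTS itself implements different algorithms (e.g., IPOG), so this is a conceptual illustration only.

```java
import java.util.*;

public class PairwiseSketch {
    public static void main(String[] args) {
        // invented parameter model: mode, speed level, door state
        String[][] params = {
                {"AUTO", "MANUAL"},
                {"LOW", "MID", "HIGH"},
                {"OPEN", "CLOSED"}
        };
        // every value pair across distinct parameters must appear in some test
        Set<String> uncovered = new HashSet<>();
        for (int p = 0; p < params.length; p++)
            for (int q = p + 1; q < params.length; q++)
                for (String v : params[p])
                    for (String w : params[q])
                        uncovered.add(p + "=" + v + "," + q + "=" + w);

        Random rnd = new Random(1);
        List<String[]> suite = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            // sample candidate tests; keep the one covering most uncovered pairs
            String[] best = null;
            int bestGain = 0;
            for (int c = 0; c < 50; c++) {
                String[] t = new String[params.length];
                for (int p = 0; p < params.length; p++)
                    t[p] = params[p][rnd.nextInt(params[p].length)];
                int gain = 0;
                for (String pair : pairsOf(t)) if (uncovered.contains(pair)) gain++;
                if (gain > bestGain) { bestGain = gain; best = t; }
            }
            if (best == null) continue;          // rare: resample until progress
            uncovered.removeAll(pairsOf(best));
            suite.add(best);
        }
        suite.forEach(t -> System.out.println(Arrays.toString(t)));
        System.out.println(suite.size() + " tests cover all pairs (12 would be exhaustive)");
    }

    static List<String> pairsOf(String[] t) {
        List<String> pairs = new ArrayList<>();
        for (int p = 0; p < t.length; p++)
            for (int q = p + 1; q < t.length; q++)
                pairs.add(p + "=" + t[p] + "," + q + "=" + t[q]);
        return pairs;
    }
}
```

With this model, 12 tests would be exhaustive, while the greedy loop typically covers all pairs with 6 or 7; the gap widens rapidly as parameters and values grow, which is the economic argument for combinatorial testing.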
Implementing bug-free concurrent programs is a challenging task in modern software development. State-of-the-art static analyses find hundreds of concurrency bugs in production code, scaling to large codebases. Yet, fixing these bugs in constantly changing codebases represents a daunting effort for programmers, particularly because a fix in concurrent code can subtly introduce other bugs. In this work, we show how to harness compositional static analysis for concurrency bug detection to enable a new Automated Program Repair (APR) technique for data races in large concurrent Java codebases. The key innovation of our work is an algorithm that translates procedure summaries, inferred by the analysis tool for the purpose of bug reporting, into small local patches that fix concurrency bugs without introducing new ones. This synergy extends the virtues of compositional static concurrency analysis to APR, making our approach effective (it can detect and fix many more bugs than existing tools for data race repair), scalable (it takes seconds to analyse and suggest fixes for sizeable codebases), and usable (it generally requires no annotations from users and can perform continuous automated repair). Our study, conducted on popular open-source projects, confirmed that our tool automatically produces concurrency fixes similar to those proposed by developers in the past.
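To ground the notion of a small local patch for a data race, the sketch below shows a classic lost-update race on a shared counter and the kind of one-line synchronization fix such a tool might propose. It is a generic Java example, not output from the tool described above.

```java
import java.util.concurrent.*;

public class RaceFixSketch {
    static int unsafeCount = 0;                 // racy: ++ is read-modify-write
    static int safeCount = 0;
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int t = 0; t < 2; t++) {
            pool.submit(() -> {
                for (int i = 0; i < 100_000; i++) {
                    unsafeCount++;              // BUG: lost updates under contention
                    synchronized (lock) {       // PATCH: guard the shared write
                        safeCount++;
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("unsafe = " + unsafeCount + " (often < 200000)");
        System.out.println("safe   = " + safeCount + " (always 200000)");
    }
}
```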
Large-scale collaborative scientific software projects require more knowledge than any one person typically possesses. This makes coordination and communication of knowledge and expertise a key factor in creating and safeguarding software quality, without which we cannot have sustainable software. However, as researchers attempt to scale up the production of software, they are confronted by problems of awareness and understanding. This presents an opportunity to develop better practices and tools that directly address these challenges. To that end, we conducted a case study of developers of the Trilinos project. We surveyed the software development challenges they address and show how those problems are connected with what developers know and how they communicate. Based on these data, we provide a series of practicable recommendations and outline a path forward for future research.