
Applications of Multi-view Learning Approaches for Software Comprehension

Posted by Amir Saeidi
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





Program comprehension concerns the ability of an individual to build an understanding of an existing software system in order to extend or transform it. Software systems comprise data that are noisy and incomplete, which makes program understanding even more difficult. A software system consists of various views, including the module dependency graph, execution logs, evolutionary information, and the vocabulary used in the source code, that collectively define the system. Each of these views contains unique and complementary information which, taken together, can describe the data more accurately. In this paper, we investigate techniques for combining different sources of information to improve the performance of a program comprehension task. We employ state-of-the-art machine learning techniques to 1) find a suitable similarity function for each view, and 2) compare different multi-view learning techniques to decompose a software system into high-level units, give component-level recommendations for refactoring the system, and support cross-view source code search. Experiments conducted on 10 relatively large Java software systems show that by fusing knowledge from different views, we can guarantee a lower bound on the quality of the modularization and even improve upon it. We proceed by integrating different sources of information to give a set of high-level recommendations as to how to refactor the software system. Furthermore, we demonstrate how learning a joint subspace allows for performing cross-modal retrieval across views, yielding results that are more aligned with what the user intends by the query. The multi-view approaches outlined in this paper can be employed to address problems in software engineering that can be encoded as a learning problem, such as software bug prediction and feature location.
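The abstract does not prescribe a single fusion algorithm, so the following is only a rough Python sketch of the general idea: it builds a lexical view (source-code vocabulary) and a structural view (module dependency graph), fuses them by averaging their similarity matrices, and applies spectral clustering to recover high-level units. The toy file contents, dependency matrix, and function names are hypothetical, not the paper's implementation.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def lexical_similarity(documents):
    """Lexical view: similarity from the vocabulary used in each source file."""
    tfidf = TfidfVectorizer().fit_transform(documents)
    return cosine_similarity(tfidf)

def structural_similarity(adjacency):
    """Structural view: files that depend on similar sets of files are similar."""
    return cosine_similarity(adjacency)

def fuse_views(similarity_matrices, weights=None):
    """Late fusion: a weighted average of the per-view similarity matrices."""
    weights = weights or [1.0 / len(similarity_matrices)] * len(similarity_matrices)
    return sum(w * s for w, s in zip(weights, similarity_matrices))

# Hypothetical toy system: four source files described by two views.
docs = [
    "parse token stream lexer",      # file 0
    "token stream buffer read",      # file 1
    "render widget layout paint",    # file 2
    "widget event paint dispatch",   # file 3
]
deps = np.array([                    # module dependency graph (adjacency matrix)
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

fused = fuse_views([lexical_similarity(docs), structural_similarity(deps)])
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(fused)
print(labels)  # e.g. [0 0 1 1]: one label per recovered high-level unit
```

In the same spirit, the cross-view search described above would learn a joint subspace over the views (for instance with canonical correlation analysis) so that a query expressed in one view can retrieve artifacts from another.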


Read also

Internet of Things Driven Data Analytics (IoT-DA) has the potential to advance data-driven operationalisation of smart environments. However, limited research exists on how IoT-DA applications are designed, implemented, operationalised, and evolved in the context of the software and system engineering life-cycle. This article empirically derives a framework that can be used to systematically investigate the role of software engineering (SE) processes and their underlying practices in engineering IoT-DA applications. First, using existing frameworks and taxonomies, we develop an evaluation framework to assess software processes, methods, and other SE artefacts for IoT-DA. Second, we perform a systematic mapping study to qualitatively select 16 SE processes for IoT-DA (from academic research and industrial solutions). Third, we apply our evaluation framework, based on 17 distinct criteria (a.k.a. process activities), for a fine-grained investigation of each of the 16 SE processes. Fourth, we apply our proposed framework to a case study to demonstrate the development of an IoT-DA healthcare application. Finally, we highlight key challenges, recommended practices, and lessons learnt based on the frameworks' support for process-centric software engineering of IoT-DA. The results of this research can help researchers and practitioners engineer emerging and next-generation IoT-DA software applications.
The recent growth of Internet of Things (IoT) devices has led to the rise of various complex applications that involve interactions among large numbers of heterogeneous devices. An important challenge that needs to be addressed is to facilitate the agile development of IoT applications with minimal effort by the various parties involved in the process. However, IoT application development is challenging due to the wide variety of hardware and software technologies that interact in an IoT system. Moreover, it involves dealing with issues attributed to different software life-cycle phases: development, deployment, and progression. In this paper, we examine three IoT application development approaches: Mashup-based development, Model-based development, and Function-as-a-Service based development. The advantages and disadvantages of each approach are discussed from different perspectives, including reliability, deployment expeditiousness, ease of use, and targeted audience. Finally, we propose a simple solution in which these techniques are combined to deliver reliable applications while reducing cost and time to release.
This paper addresses the challenge of understanding the waiting dependencies between the threads and hardware resources required to complete a task. The objective is to improve software performance by detecting the underlying bottlenecks caused by system-level blocking dependencies. We use a system-level tracing approach to extract a Waiting Dependency Graph that shows the breakdown of a task's execution among all the interleaving threads and resources. The method allows developers and system administrators to quickly discover how the total execution time is divided among the interacting threads and resources, and ultimately helps detect bottlenecks and highlight their possible causes. Our experiments show the effectiveness of the proposed approach in several industry-level use cases: three performance anomalies are analysed and explained using the proposed approach, and evaluating the method's efficiency reveals that the imposed overhead never exceeds 10.1%, making it suitable for in-production environments.
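The abstract only summarizes how the graph is built; as a minimal sketch, assuming trace events have already been reduced to (waiter, blocker, waiting-time) triples (the record format, names, and helper functions below are hypothetical, not the paper's tooling), a waiting dependency graph and its dominant blocking chain could look like this:

```python
from collections import defaultdict

# Hypothetical, pre-processed trace records: (waiter, blocker, waiting time in ms).
# Real system-level traces (e.g. kernel scheduling and block I/O events) would
# have to be reduced to this form first.
records = [
    ("task",    "threadA", 120.0),
    ("threadA", "mutex1",  110.0),
    ("mutex1",  "threadB", 100.0),   # mutex1 is currently held by threadB
    ("threadB", "disk",     95.0),   # threadB is blocked on disk I/O
    ("task",    "threadC",  10.0),
]

def build_waiting_dependency_graph(records):
    """Edge waiter -> blocker, weighted by the accumulated waiting time."""
    graph = defaultdict(dict)
    for waiter, blocker, ms in records:
        graph[waiter][blocker] = graph[waiter].get(blocker, 0.0) + ms
    return graph

def dominant_wait_chain(graph, root):
    """Follow the heaviest outgoing edge from the root to expose the chain of
    blocking dependencies that dominates the task's execution time
    (assumes the wait graph is acyclic)."""
    chain, node = [root], root
    while graph.get(node):                      # stops at a node with no blockers
        node, _ = max(graph[node].items(), key=lambda kv: kv[1])
        chain.append(node)
    return chain

graph = build_waiting_dependency_graph(records)
print(dominant_wait_chain(graph, "task"))
# ['task', 'threadA', 'mutex1', 'threadB', 'disk'] -> disk I/O is the likely bottleneck
```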
Industry in all sectors is experiencing a profound digital transformation that puts software at the core of their businesses. In order to react to continuously changing user requirements and dynamic markets, companies need to build robust workflows that allow them to increase their agility and remain competitive. This increasingly rapid transformation, especially in domains like IoT or cloud computing, poses significant challenges for guaranteeing high-quality software, since dynamism and agile short-term planning reduce the ability to detect and manage risks. In this paper, we describe the main challenges related to managing risk in agile software development, building on the experience of more than 20 agile coaches operating continuously for 15 years with hundreds of teams in industries in all sectors. We also propose a framework to manage risks that addresses those challenges and supports collaboration, agility, and continuous development. An implementation of that framework is then described in a tool that handles risks and mitigation actions associated with the development of multi-cloud applications. The methodology and the tool have been validated by a team of evaluators who were asked to consider their use in developing an urban smart mobility service and an airline flight scheduling system.
Yanming Yang, Xin Xia, David Lo (2020)
In 2006, Geoffrey Hinton proposed the concept of training Deep Neural Networks (DNNs) and an improved model training method to break the bottleneck of neural network development. More recently, the introduction of AlphaGo in 2016 demonstrated the powerful learning ability of deep learning and its enormous potential. Deep learning has been increasingly used to develop state-of-the-art software engineering (SE) research tools due to its ability to boost performance on various SE tasks. There are many factors, e.g., deep learning model selection, internal structure differences, and model optimization techniques, that may have an impact on the performance of DNNs applied in SE. Few works to date focus on summarizing, classifying, and analyzing the application of deep learning techniques in SE. To fill this gap, we performed a survey analysing the relevant studies published since 2006. We first provide an example to illustrate how deep learning techniques are used in SE. We then summarize and classify the different deep learning techniques used in SE, analyse the key optimization technologies used in these deep learning models, and finally describe a range of key research topics that use DNNs in SE. Based on our findings, we present a set of current challenges that remain to be investigated and outline a proposed research road map highlighting key opportunities for future work.