
How do we Evaluate Self-adaptive Software Systems?

 Added by Thomas Vogel
 Publication date 2021
Language: English





With the increase of research in self-adaptive systems, there is a need to better understand the way research contributions are evaluated. Such insights will support researchers to better compare new findings when developing new knowledge for the community. However, so far there is no clear overview of how evaluations are performed in self-adaptive systems. To address this gap, we conduct a mapping study. The study focuses on experimental evaluations published in the last decade at the prime venue of research in software engineering for self-adaptive systems -- the International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS). Results point out that specifics of self-adaptive systems require special attention in the experimental process, including the distinction of the managing system (i.e., the target of evaluation) and the managed system, the presence of uncertainties that affect the system behavior and hence need to be taken into account in data analysis, and the potential of managed systems to be reused across experiments, beyond replications. To conclude, we offer a set of suggestions derived from our study that can be used as input to enhance future experiments in self-adaptive systems.
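For readers unfamiliar with the terminology, the sketch below illustrates the managing/managed distinction that the study highlights. It is a minimal, hypothetical example (all class names and numbers are ours, not from the paper): the managing system is the adaptation logic that experiments typically evaluate, while the managed system is the application being adapted under uncertainty, which is why repeated runs are needed before drawing conclusions.

```python
# Illustrative sketch only (not from the paper); all names are hypothetical.
import random


class ManagedSystem:
    """The system under adaptation, e.g. a small web application."""

    def __init__(self):
        self.servers = 1

    def observe_latency(self):
        # Environmental uncertainty: latency fluctuates with load and server count.
        load = random.uniform(0.5, 1.5)
        return 100 * load / self.servers

    def scale_to(self, servers):
        self.servers = servers


class ManagingSystem:
    """The adaptation logic -- typically the target of evaluation in experiments."""

    def __init__(self, managed, latency_goal=60):
        self.managed = managed
        self.latency_goal = latency_goal

    def adapt(self):
        latency = self.managed.observe_latency()   # monitor
        if latency > self.latency_goal:            # analyze
            plan = self.managed.servers + 1        # plan
            self.managed.scale_to(plan)            # execute
        return latency


if __name__ == "__main__":
    managed = ManagedSystem()
    managing = ManagingSystem(managed)
    # Uncertainty makes single observations unreliable, hence repeated runs.
    print([round(managing.adapt(), 1) for _ in range(10)])
```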




Related research

Hashing produces compact representations for documents, to perform tasks like classification or retrieval based on these short codes. When hashing is supervised, the codes are trained using labels on the training data. This paper first shows that the evaluation protocols used in the literature for supervised hashing are not satisfactory: we show that a trivial solution that encodes the output of a classifier significantly outperforms existing supervised or semi-supervised methods, while using much shorter codes. We then propose two alternative protocols for supervised hashing: one based on retrieval on a disjoint set of classes, and another based on transfer learning to new classes. We provide two baseline methods for image-related tasks to assess the performance of (semi-)supervised hashing: without coding and with unsupervised codes. These baselines give a lower- and upper-bound on the performance of a supervised hashing scheme.
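As a rough illustration of the "trivial solution" the authors describe, the sketch below encodes each item by the output of a classifier, here a one-hot code over predicted classes. The toy nearest-centroid classifier and all names are our assumptions, not the paper's implementation.

```python
# Illustrative sketch of encoding classifier output as a short hash code.
import numpy as np


def nearest_centroid_predict(X, centroids):
    """Toy classifier: assign each row of X to the closest class centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)


def classifier_code(X, centroids):
    """Hash code = one-hot encoding of the predicted class (n_classes bits)."""
    labels = nearest_centroid_predict(X, centroids)
    codes = np.zeros((len(X), len(centroids)), dtype=np.uint8)
    codes[np.arange(len(X)), labels] = 1
    return codes


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    centroids = rng.normal(size=(10, 64))    # 10 classes, 64-d features
    X = rng.normal(size=(5, 64))
    print(classifier_code(X, centroids))      # 10-bit codes, one bit set per item
```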
Technical Debt management decisions always imply a trade-off among outcomes at different points in time. In such intertemporal choices, distant outcomes are often valued lower than close ones, a phenomenon known as temporal discounting. Technical Debt research largely develops prescriptive approaches for how software engineers should make such decisions. Few have studied how they actually make them. This leaves open central questions about how software practitioners make decisions. This paper investigates how software practitioners discount uncertain future outcomes and whether they exhibit temporal discounting. We adopt experimental methods from intertemporal choice, an active area of research. We administered an online questionnaire to 33 developers from two companies in which we presented choices between developing a feature and making a longer-term investment in architecture. The results show widespread temporal discounting with notable differences in individual behavior. The results are consistent with similar studies in consumer behavior and raise a number of questions about the causal factors that influence temporal discounting in software engineering. As the first empirical study on intertemporal choice in SE, the paper establishes an empirical basis for understanding how software developers approach intertemporal choice and provides a blueprint for future studies.
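The abstract does not state which discounting model the study fits; as background, the sketch below evaluates the standard hyperbolic discounting function V = A / (1 + kD) from the intertemporal-choice literature, with purely illustrative numbers.

```python
# Background only: a standard hyperbolic discounting model, not the authors' analysis.

def hyperbolic_value(amount, delay, k):
    """Subjective present value of an outcome `amount` received after `delay` periods."""
    return amount / (1 + k * delay)


if __name__ == "__main__":
    # A developer with discount rate k = 0.1 values a payoff of 100 "architecture points"
    # delivered in 12 months at only about 45 today, which can tip the choice
    # toward the immediate feature.
    print(round(hyperbolic_value(100, 12, 0.1), 1))   # -> 45.5
```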
Radu Calinescu, 2017
Building on concepts drawn from control theory, self-adaptive software handles environmental and internal uncertainties by dynamically adjusting its architecture and parameters in response to events such as workload changes and component failures. Self-adaptive software is increasingly expected to meet strict functional and non-functional requirements in applications from areas as diverse as manufacturing, healthcare and finance. To address this need, we introduce a methodology for the systematic ENgineering of TRUstworthy Self-adaptive sofTware (ENTRUST). ENTRUST uses a combination of (1) design-time and runtime modelling and verification, and (2) industry-adopted assurance processes to develop trustworthy self-adaptive software and assurance cases arguing the suitability of the software for its intended application. To evaluate the effectiveness of our methodology, we present a tool-supported instance of ENTRUST and its use to develop proof-of-concept self-adaptive software for embedded and service-based systems from the oceanic monitoring and e-finance domains, respectively. The experimental results show that ENTRUST can be used to engineer self-adaptive software systems in different application domains and to generate dynamic assurance cases for these systems.
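The abstract does not detail ENTRUST's verification machinery; the sketch below only conveys the general flavour of a runtime check, re-evaluating a simple sequential-workflow reliability model from observed per-service success rates and flagging when a requirement is violated. All thresholds, numbers, and names are hypothetical.

```python
# Illustrative runtime-check sketch; not the ENTRUST tooling.

def workflow_reliability(success_probs):
    """Reliability of a sequential workflow = product of per-service success probabilities."""
    r = 1.0
    for p in success_probs:
        r *= p
    return r


def runtime_check(observed, requirement=0.95):
    """Return True if the reliability requirement still holds for the observed rates."""
    return workflow_reliability(observed) >= requirement


if __name__ == "__main__":
    print(runtime_check([0.99, 0.995, 0.99]))   # True: requirement met
    print(runtime_check([0.99, 0.90, 0.99]))    # False: would trigger reconfiguration
```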
Usability is an increasing concern in open source software (OSS). Given the recent changes in the OSS landscape, it is imperative to examine the factors, practices, and challenges that OSS contributors currently value and face concerning usability. We accumulated this knowledge through a survey with a wide range of contributors to OSS applications. Through analyzing 84 survey responses, we found that many participants recognized the importance of usability. While most relied on issue tracking systems to collect user feedback, a few participants also adopted typical user-centered design methods. However, most participants demonstrated a system-centric rather than a user-centric view. Understanding the diverse needs of end-users and consolidating their varied feedback posed unique challenges for OSS contributors when addressing usability in the most recent development context. Our work provides important insights for OSS practitioners and tool designers exploring ways to promote a user-centric mindset and improve usability practice in current OSS communities.
The software development community has been using code quality metrics for the last five decades. Despite their wide adoption, code quality metrics have attracted a fair share of criticism. In this paper, first, we carry out a qualitative exploration by surveying software developers to gauge their opinions about current practices and potential gaps with the present set of metrics. We identify deficiencies including lack of soundness, i.e., the ability of a metric to capture a notion accurately as promised by the metric, lack of support for assessing software architecture quality, and insufficient support for assessing software testing and infrastructure. In the second part of the paper, we focus on one specific code quality metric, LCOM, as a case study to explore opportunities towards improved metrics. We evaluate existing LCOM algorithms qualitatively and quantitatively to observe how closely they represent the concept of cohesion. In this pursuit, we first create eight diverse cases that any LCOM algorithm must cover, obtain their cohesion levels from a set of experienced developers, and consider these levels as a ground truth. We show that the existing LCOM algorithms do poorly w.r.t. these cases. To bridge the identified gap, we propose a new approach to compute LCOM and evaluate it against the ground truth. We also show, through a quantitative analysis of more than 90 thousand types belonging to 261 high-quality Java repositories, that the existing methods paint a very inaccurate and misleading picture of class cohesion. We conclude that the current code quality metrics in use suffer from various deficiencies, presenting ample opportunities for the research community to address the gaps.
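For context on what an LCOM algorithm computes, the sketch below implements one classic definition, Chidamber and Kemerer's LCOM, not the improved metric proposed in the paper; the example classes are hypothetical. A class is described as a mapping from method name to the set of fields it accesses, and LCOM = max(P - Q, 0), where P is the number of method pairs sharing no field and Q the number sharing at least one.

```python
# Background sketch: classic Chidamber-Kemerer LCOM, not the paper's new approach.
from itertools import combinations


def lcom_ck(method_fields):
    """method_fields: dict mapping method name -> set of accessed field names."""
    p = q = 0
    for m1, m2 in combinations(method_fields.values(), 2):
        if m1 & m2:
            q += 1   # pair shares at least one field
        else:
            p += 1   # pair shares no field
    return max(p - q, 0)


if __name__ == "__main__":
    cohesive = {"get_x": {"x"}, "set_x": {"x"}, "scale": {"x"}}
    split = {"get_x": {"x"}, "get_y": {"y"}, "get_z": {"z"}}
    print(lcom_ck(cohesive))  # 0: every method pair shares a field
    print(lcom_ck(split))     # 3: no method pair shares a field
```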
