
Localizing Faults in Cloud Systems

Posted by: Cristina Monni
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





By leveraging large clusters of commodity hardware, the Cloud offers great opportunities to optimize the operating costs of software systems, but it significantly impacts the reliability of software applications. The limited control that applications have over Cloud execution environments largely restricts the applicability of state-of-the-art approaches that address reliability issues by relying on heavyweight training with injected faults. In this paper, we propose LOUD, a lightweight fault localization approach that relies on positive training only and can thus operate within the constraints of Cloud systems. LOUD combines machine learning and graph theory: it trains machine learning models with correct executions only, and compensates for the inaccuracy that derives from training on positive samples alone by processing the output of the machine learning techniques with graph algorithms. The experimental results reported in this paper confirm that LOUD can localize faults with high precision while relying only on lightweight positive training.
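The pipeline described in the abstract lends itself to a small illustration: a one-class model trained only on correct executions flags anomalous components, and a graph algorithm then refines the suspect ranking. The sketch below is not the authors' implementation; the component names, KPI shapes, propagation edges, and the choice of IsolationForest and PageRank are assumptions made only to show the shape of a positive-training-plus-graph-analysis approach.

# Hedged sketch of a LOUD-style pipeline (illustrative only; not the authors' code).
# Assumptions: per-component KPI vectors are available for correct ("positive")
# executions and for the execution under analysis; the component names and the
# propagation edges below are hypothetical.
import numpy as np
import networkx as nx
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
components = ["frontend", "cache", "db"]  # hypothetical components

# 1) Positive-only training: fit an anomaly detector per component using
#    KPIs collected during correct executions only (no injected faults).
train = {c: rng.normal(0.0, 1.0, size=(200, 4)) for c in components}
detectors = {c: IsolationForest(random_state=0).fit(x) for c, x in train.items()}

# 2) At analysis time, score the current KPIs; anomalous components are suspects.
current = {c: rng.normal(0.0, 1.0, size=(1, 4)) for c in components}
current["db"] += 5.0  # simulate a misbehaving component
scores = {c: float(-detectors[c].decision_function(x)[0]) for c, x in current.items()}
weights = {c: max(0.0, s) + 1e-3 for c, s in scores.items()}  # keep edge weights positive

# 3) Graph-based refinement: weight a (hypothetical) anomaly-propagation graph
#    by the scores and rank components with a centrality algorithm, so the
#    likely fault origin stands out even when several components look anomalous.
g = nx.DiGraph()
g.add_weighted_edges_from([
    ("frontend", "cache", weights["cache"]),
    ("cache", "db", weights["db"]),
    ("frontend", "db", weights["db"]),
])
ranking = nx.pagerank(g, weight="weight")
print(sorted(ranking.items(), key=lambda kv: kv[1], reverse=True))

Running the sketch ranks "db" first, which mirrors the intended behavior: the graph step concentrates suspicion on the component where anomalies originate rather than on every component that merely looks anomalous.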




Read also

Kim G. Larsen (2013)
This volume contains the proceedings of the first workshop on Advances in Systems of Systems (AISOS13), held in Rome, Italy, on March 16. System-of-Systems describes the large-scale integration of many independent, self-contained systems to satisfy global needs or multi-system requests. Examples are smart grids, intelligent buildings, smart cities, transport systems, etc. There is a need for new modeling formalisms, analysis methods, and tools to support trade-off decisions during design and evolution, and to avoid sub-optimal designs and rework during integration and in service. The workshop focuses on the modeling and analysis of Systems of Systems. AISOS13 aims to gather people from different communities in order to encourage the exchange of methods and views.
Nilay Oza (2013)
Cloud-based infrastructure has been increasingly adopted by industry in distributed software development (DSD) environments. Its proponents claim that its benefits include reduced cost, increased speed, and greater productivity in software development. Empirical evaluations, however, are only in the nascent stage of examining both the benefits and the risks of cloud-based infrastructure. The objective of this paper is to identify the potential benefits and risks of using the cloud in a DSD project conducted by teams based in Helsinki and Madrid. A cross-case qualitative analysis is performed based on focus groups conducted at the Helsinki and Madrid sites, supplemented by participant observations. The analysis indicates that the main benefits of using the cloud are rapid development, continuous integration, cost savings, code sharing, and faster ramp-up. The key risks identified by the project are dependencies, unavailability of access to the cloud, code commitment and integration, technical debt, and additional support costs. The results reveal that if such environments are not planned and set up carefully, the benefits of using the cloud in DSD projects may be overshadowed by the associated risks.
Migration to the cloud has been a popular topic in industry and academia in recent years. Despite the many benefits the cloud presents, such as high availability and scalability, most on-premise application architectures are not ready to fully exploit this environment, and adapting them to it is a non-trivial task. Microservices have recently emerged as an architectural style that is native to the cloud. Such cloud-native architectures can facilitate migrating on-premise architectures so that they fully benefit from cloud environments, because non-functional attributes, like scalability, are inherent in this style. Most existing approaches to cloud migration do not treat cloud-native architectures as first-class citizens; as a result, the final product may not meet the primary drivers for migration. In this paper, we report our experience and lessons learned in an ongoing project on migrating a monolithic on-premise software architecture to microservices. We conclude that microservices are not a one-size-fits-all solution, as they introduce new complexities to the system, and many factors, such as distribution complexities, should be considered before adopting this style. However, if adopted in a context that needs high flexibility in terms of scalability and availability, the style can deliver its promised benefits.
Many techniques have been proposed for detecting software misconfigurations in cloud systems and for diagnosing the unintended behavior caused by such misconfigurations. Detection and diagnosis are steps in the right direction: misconfigurations cause many costly failures and severe performance issues. But we argue that the continued focus on detection and diagnosis is symptomatic of a more serious problem: configuration design and implementation are not yet first-class software engineering endeavors in cloud systems. Little is known about how and why developers evolve configuration design and implementation, and the challenges that they face in doing so. This paper presents a source-code level study of the evolution of configuration design and implementation in cloud systems. Our goal is to understand the rationale and developer practices for revising initial configuration design and implementation decisions, especially in response to the consequences of misconfigurations. To this end, we studied 1178 configuration-related commits from a 2.5-year version-control history of four large-scale, actively maintained open-source cloud systems (HDFS, HBase, Spark, and Cassandra). We derive new insights into the software configuration engineering process. Our results motivate new techniques for proactively reducing misconfigurations by improving the configuration design and implementation process in cloud systems, and we highlight a number of future research directions.
In recent years, cloud storage systems have seen an enormous rise in usage. However, despite their popularity and importance as the underlying infrastructure for more complex cloud services, today's cloud storage systems do not account for compliance with regulatory, organizational, or contractual data handling requirements by design. Since legislation increasingly responds to rising data protection and privacy concerns, complying with data handling requirements is becoming a crucial property of cloud storage systems. We present PRADA, a practical approach to account for compliance with data handling requirements in key-value based cloud storage systems. To achieve this goal, PRADA introduces a transparent data handling layer, which empowers clients to request specific data handling requirements and enables operators of cloud storage systems to comply with them. We implement PRADA on top of the distributed database Cassandra and show in our evaluation that complying with data handling requirements in cloud storage systems is practical in real-world cloud deployments, as used for microblogging, data sharing in the Internet of Things, and distributed email storage.
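To make the idea of a transparent data handling layer concrete, the following sketch shows one possible way that per-request data handling requirements could be matched against storage-node capabilities before a write is placed. It is an illustration under assumptions, not PRADA's implementation (which sits on top of Cassandra): the node names, capability fields, and requirement keys are hypothetical.

# Hedged sketch of a data-handling-aware put in a key-value layer
# (illustrative only; the real system extends Cassandra's replica placement).
from dataclasses import dataclass, field

@dataclass
class StorageNode:
    name: str
    capabilities: dict                      # e.g. {"location": "EU", "full_disk_encryption": True}
    data: dict = field(default_factory=dict)

def satisfies(capabilities: dict, requirements: dict) -> bool:
    # A node is eligible only if it meets every requested requirement.
    return all(capabilities.get(k) == v for k, v in requirements.items())

def put(nodes, key, value, requirements, replicas=2):
    # Route the write only to nodes that comply with the data handling requirements.
    eligible = [n for n in nodes if satisfies(n.capabilities, requirements)]
    if len(eligible) < replicas:
        raise RuntimeError("not enough compliant nodes for the requested requirements")
    for node in eligible[:replicas]:
        node.data[key] = value
    return [n.name for n in eligible[:replicas]]

nodes = [
    StorageNode("n1", {"location": "EU", "full_disk_encryption": True}),
    StorageNode("n2", {"location": "EU", "full_disk_encryption": False}),
    StorageNode("n3", {"location": "US", "full_disk_encryption": True}),
]
print(put(nodes, "user:42", b"profile-blob",
          {"location": "EU", "full_disk_encryption": True}, replicas=1))

The key design point the sketch captures is that compliance is enforced at write time by constraining placement, so clients simply annotate requests with requirements and never need to know which nodes hold their data.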