
Memory Leak Detection Algorithms in the Cloud-based Infrastructure

Added by Anshul Jindal
Publication date: 2021
Research language: English





A memory leak in an application deployed on the cloud can affect the availability and reliability of the application. Therefore, identifying and ultimately resolving it quickly is highly important. However, in a production environment running on the cloud, memory leak detection is a challenge without knowledge of the application or its internal object allocation details. This paper addresses the challenge of detecting memory leaks in cloud-based infrastructure without any internal knowledge by introducing two novel machine learning-based algorithms, Linear Backward Regression (LBR) and Precog, and their two variants, Linear Backward Regression with Change Points Detection (LBRCPD) and Precog with Maximum Filtration (PrecogMF). These algorithms use only one metric, i.e., the memory utilization of the system on which the application is deployed, to detect a memory leak. The developed algorithms' accuracy was tested on manually labeled memory utilization data from 60 virtual machines, and the proposed PrecogMF algorithm achieved the highest accuracy score of 85%. The same algorithm also reduces the overall compute time by 80% compared to LBR's compute time. The paper also presents the different memory leak patterns found in various applications with memory leaks and classifies them into different classes based on their visual representation.
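For intuition, here is a minimal sketch of the trend-based idea behind LBR, assuming a fixed observation window; the function name and thresholds below are illustrative placeholders, not the paper's actual parameters or code:

```python
import numpy as np

def detect_leak_by_trend(mem_util, slope_threshold=0.0, r2_threshold=0.8):
    """Flag a possible memory leak when memory utilization follows a
    steadily rising linear trend over the observed window.

    mem_util: sequence of memory utilization samples (e.g., percent).
    The thresholds are illustrative, not values from the paper.
    """
    y = np.asarray(mem_util, dtype=float)
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)        # least-squares line fit
    fitted = slope * t + intercept
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - np.sum((y - fitted) ** 2) / ss_tot if ss_tot > 0 else 0.0
    # Leak candidate: utilization trends upward and the linear fit is strong.
    return slope > slope_threshold and r2 >= r2_threshold
```

Per the abstract, the LBRCPD and PrecogMF variants refine such a trend test by segmenting the series at detected change points and by smoothing it with a maximum filter, respectively.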



Related research

A memory leak in an application deployed on the cloud can affect the availability and reliability of the application. Therefore, identifying and ultimately resolving it quickly is highly important. However, in a production environment running on the cloud, memory leak detection is a challenge without knowledge of the application or its internal object allocation details. This paper addresses the challenge of online detection of memory leaks in cloud-based infrastructure without any internal application knowledge by introducing a novel machine learning-based algorithm, Precog. This algorithm uses solely one metric, i.e., the memory utilization of the system on which the application is deployed, to detect a memory leak. The developed algorithm's accuracy was tested on manually labeled memory utilization data from 60 virtual machines, provided by our industry partner Huawei Munich Research Center, and the proposed algorithm achieves an accuracy score of 85% with less than half a second of prediction time per virtual machine.
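A hedged sketch of how such an online, per-VM check might look, re-running a cheap trend test over a sliding window of the single memory utilization metric; the window length, thresholds, and class name are assumptions, not the published Precog design:

```python
from collections import deque
import numpy as np

class OnlineLeakMonitor:
    """Illustrative online monitor: one instance per virtual machine,
    fed a new memory utilization sample at each collection interval."""

    def __init__(self, window=360, slope_threshold=0.0, r2_threshold=0.8):
        self.samples = deque(maxlen=window)   # sliding window of recent samples
        self.slope_threshold = slope_threshold
        self.r2_threshold = r2_threshold

    def update(self, mem_util_sample):
        """Append one sample and return True if a leak is suspected."""
        self.samples.append(mem_util_sample)
        if len(self.samples) < 10:            # too little history to judge
            return False
        y = np.asarray(self.samples, dtype=float)
        t = np.arange(len(y))
        slope, intercept = np.polyfit(t, y, 1)
        fitted = slope * t + intercept
        ss_tot = np.sum((y - y.mean()) ** 2)
        r2 = 1.0 - np.sum((y - fitted) ** 2) / ss_tot if ss_tot > 0 else 0.0
        return slope > self.slope_threshold and r2 >= self.r2_threshold
```

Because only one short series per VM is fitted at a time, each check stays cheap, which is consistent with the sub-half-second per-VM prediction time reported above.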
Many IoT systems are data intensive and are used for monitoring, fault detection, and diagnosis of critical systems. A large volume of data steadily flows out of a large number of sensors in the monitoring system, so we need to consider how to store and manage these data. Existing time series databases (TSDBs) can be used for monitoring data storage, but they do not have good models for describing the data streams stored in the database. In this paper, we develop a semantic model for the specification of monitoring data streams (time series data) in terms of which sensor generated the data stream, which metric of which entity the sensor is monitoring, what the relation of the entity is to other entities in the system, which measurement unit is used for the data stream, etc. We have also developed a tool suite, SE-TSDB, that can run on top of existing TSDBs to help establish semantic specifications for data streams and enable semantic-based data retrieval. With our semantic model for monitoring data and our SE-TSDB tool suite, users can retrieve non-existing data streams that can be automatically derived from the semantics, and can retrieve data streams without knowing where they are stored. Semantic-based retrieval is especially important in a large-scale integrated IoT-Edge-Cloud system because of its sheer quantity of data, its huge number of computing and IoT devices that may store the data, and the dynamics of data migration and evolution. With better data semantics, data streams can be more effectively tracked and flexibly retrieved to help with timely data analysis and control decision making anywhere and anytime.
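The semantic specification described above can be pictured as structured metadata attached to each stored stream; a minimal sketch, where the `DataStreamSpec` type and its field names are hypothetical illustrations rather than the SE-TSDB schema:

```python
from dataclasses import dataclass, field

@dataclass
class DataStreamSpec:
    """Hypothetical semantic descriptor for one monitoring data stream."""
    stream_id: str                  # key of the series in the backing TSDB
    sensor: str                     # which sensor generated the stream
    metric: str                     # which metric is measured, e.g. "temperature"
    entity: str                     # which entity the sensor monitors
    unit: str                       # measurement unit, e.g. "celsius"
    relations: dict = field(default_factory=dict)   # links to other entities

def find_streams(specs, metric=None, entity=None):
    """Semantic-style retrieval: select streams by what they describe,
    not by where they are stored."""
    return [s for s in specs
            if (metric is None or s.metric == metric)
            and (entity is None or s.entity == entity)]

specs = [DataStreamSpec("ts-0042", "sensor-17", "temperature", "pump-3",
                        "celsius", {"pump-3": "part_of:cooling-loop-1"})]
print(find_streams(specs, metric="temperature"))
```

Such descriptors are also what would let derived (non-existing) streams be computed on demand, e.g. by following the recorded entity relations.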
This paper presents a novel application of Genetic Algorithms (GAs) to quantify the performance of Platform as a Service (PaaS), a cloud service model that plays a critical role in both industry and academia. While cloud benchmarks are not new, in this novel concept the authors use a GA to take advantage of the elasticity of cloud services in a graceful manner that was not previously possible. Using Google App Engine, Heroku, and PythonAnywhere with three distinct classes of client computers running our GA codebase, we quantified the completion time for applying the GA to search for controller parameters for dynamical systems. Our results show statistically significant differences in PaaS performance by vendor, and also that PaaS performance depends on the client that uses it. The results also show the effectiveness of our GA in determining the level of service of PaaS providers and in determining whether the level of service of one PaaS vendor is repeatable with another. Such a concept could increase the appeal of PaaS cloud services by making them more financially appealing.
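As a rough illustration of the kind of GA workload that could exercise such a platform, here is a minimal sketch; the toy fitness function, population size, and operators are placeholders rather than the authors' codebase, whose real objective scored controller parameters for dynamical systems:

```python
import random
import time

def fitness(params):
    """Toy stand-in for the benchmark's objective of evaluating
    controller parameters for a dynamical system."""
    return -sum((p - 0.5) ** 2 for p in params)

def run_ga(pop_size=50, n_params=4, generations=100, mutation_rate=0.1):
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    start = time.perf_counter()
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < mutation_rate:
                child[random.randrange(n_params)] = random.random()
            children.append(child)
        pop = parents + children
    elapsed = time.perf_counter() - start          # completion time is the benchmark metric
    return max(pop, key=fitness), elapsed

best, seconds = run_ga()
print(f"best={best} in {seconds:.2f}s")
```

Running the same code from different clients against different PaaS vendors and comparing completion times is the essence of the benchmarking idea.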
This paper presents results of the ongoing development of the Cloud Services Delivery Infrastructure (CSDI), which provides a basis for infrastructure-centric cloud services provisioning, operation, and management in a multi-cloud, multi-provider environment, defined as the Zero Touch Provisioning, Operation and Management (ZTP/ZTPOM) model. The presented work refers to use cases from data-intensive research that require high-performance computation resources and large storage volumes, typically distributed between datacenters and often involving multiple cloud providers. Automation for large-scale scientific (and industrial) applications should include provisioning of both inter-cloud network infrastructure and intra-cloud application resources. It should support the complete application operation workflow, together with the possible changes to the application infrastructure and resources that can occur during the application lifecycle. The authors investigate existing technologies for automating the service provisioning and management processes, aiming to cross-pollinate best practices from currently disconnected domains such as cloud-based application provisioning and multi-domain high-performance network provisioning. The paper refers to the authors' previous research, the Open Cloud eXchange (OCX), which was proposed to address the last-mile problem in cloud services delivery to campuses over trans-national backbone networks such as GEANT. OCX will serve as an integral component of the prospective ZTP infrastructure over the GEANT network. Another important component, the Marketplace, is defined for cloud services and application discovery (in a general inter-cloud environment) and may also support additional services such as service composition and trust brokering for establishing customer-provider federations.
Gradecast is a simple three-round algorithm presented by Feldman and Micali. The current work presents a very simple algorithm that utilizes Gradecast to achieve Byzantine agreement. Two small variations of the presented algorithm lead to improved algorithms for solving the approximate agreement problem and the multi-consensus problem. An optimal approximate agreement algorithm was presented by Fekete; it supports up to n/4 Byzantine nodes and has message complexity O(n^k), where n is the number of nodes and k is the number of rounds. Our solution to the approximate agreement problem is optimal and simple, and it reduces the message complexity to O(k * n^3) while supporting up to n/3 Byzantine nodes. Multi-consensus was first presented by Bar-Noy et al. It consists of consecutive executions of l Byzantine consensus instances. Bar-Noy et al. show an optimal amortized solution to this problem, assuming that all nodes start each consensus instance at the same time, a property that cannot be guaranteed with early stopping. Our solution is simpler, preserves round complexity optimality, allows early stopping, and does not require synchronized starts of the consensus instances.
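For intuition about the approximate agreement part, here is the classic value-update step used by protocols in this family (a generic sketch of the standard trimmed-mean technique, not the Gradecast-based algorithm from the paper):

```python
def approx_agreement_step(received_values, t):
    """One illustrative round: with at most t Byzantine senders, discarding
    the t smallest and t largest received values leaves only values inside
    the range held by correct nodes; adopting their mean shrinks the spread
    of correct nodes' values from round to round."""
    vals = sorted(received_values)
    assert len(vals) > 2 * t, "need more than 2t values to trim safely"
    trimmed = vals[t : len(vals) - t]    # drop t extremes on each side
    return sum(trimmed) / len(trimmed)
```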
