
Resource Management Schemes for Cloud-Native Platforms with Computing Containers of Docker and Kubernetes

 Added by Ying Mao
 Publication date 2020
Language: English





Businesses have increasingly adopted and incorporated cloud technology into internal processes over the last decade. Cloud-based deployment provides on-demand availability without active management. More recently, the concept of the cloud-native application has been proposed; it represents an invaluable step toward helping organizations develop software faster and update it more frequently to achieve dramatic business outcomes. Cloud-native is an approach to building and running applications that exploits the advantages of the cloud computing delivery model. It is more about how applications are created and deployed than about where. Container-based virtualization technology, such as Docker and Kubernetes, serves as the foundation for cloud-native applications. This paper investigates the performance of two popular computation-intensive applications, big data and deep learning, in a cloud-native environment. We analyze the system overhead and resource usage of these applications. Through extensive experiments, we show that completion time is reduced by up to 79.4% by changing the default setting and increases by up to 96.7% under different resource management schemes on the two platforms. Additionally, resource release is delayed by up to 116.7% across different systems. Our work can guide developers, administrators, and researchers to better design and deploy their applications by selecting and configuring a hosting platform.
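To make the configuration dimension concrete, the sketch below uses the Docker SDK for Python to launch a container with explicit CPU and memory limits, the kind of per-container setting whose defaults such experiments vary. It is a minimal illustration, not the paper's actual experimental setup; the image name, command, and limit values are placeholders.

    # Minimal sketch (not the paper's setup): launching a workload container
    # with explicit resource limits via the Docker SDK for Python (docker-py).
    # The image name, command, and limit values are illustrative placeholders.
    import docker

    client = docker.from_env()

    container = client.containers.run(
        "tensorflow/tensorflow:latest",      # placeholder workload image
        command="python /workload/train.py", # placeholder entry point
        detach=True,
        nano_cpus=2_000_000_000,             # 2 CPUs (nano_cpus = CPUs * 1e9)
        mem_limit="4g",                      # cap memory at 4 GiB
    )

    # Completion time can be measured by waiting for the container to exit.
    result = container.wait()
    print("exit status:", result["StatusCode"])

Varying nano_cpus and mem_limit (or leaving them at the platform defaults) is one way to reproduce the kind of completion-time sensitivity the abstract reports.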



Related research


479 - Ying Mao, Weifeng Yan, Yun Song, 2020
With the prevalence of big-data-driven applications, such as face recognition on smartphones and tailored recommendations from Google Ads, we are on the road to a lifestyle with significantly more intelligence than ever before. For example, Aipoly Vision [1] is an object and color recognizer that helps the blind, visually impaired, and color blind understand their surroundings. On the back end, various neural-network-powered models run to enable quick responses to users. Supporting those models requires substantial cloud-based computational resources, e.g., CPUs and GPUs. Cloud providers charge their clients by the amount of resources they occupy. From the clients' perspective, they have to balance the budget against the quality of experience (e.g., response time). The budget rests with individual business owners, and the required Quality of Experience (QoE) depends on the usage scenarios of different applications; for instance, an autonomous vehicle requires real-time responses, but unlocking a smartphone can tolerate delays. However, cloud providers fail to offer a QoE-based option to their clients. In this paper, we propose DQoES, a differentiated quality-of-experience scheduler for deep learning applications. DQoES accepts clients' specifications of targeted QoEs and dynamically adjusts resources to approach those targets. Through extensive cloud-based experiments, we demonstrate that DQoES can schedule multiple concurrent jobs with respect to various QoEs and achieve up to 8x more satisfied models compared to the existing system.
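A minimal sketch of such QoE-driven adjustment is given below. It is our own simplification, not the DQoES implementation: one control step scales a container's CPU quota toward a response-time target via docker-py's Container.update(). The container name, target value, and doubling/halving policy are assumptions.

    # Simplified sketch of QoE-driven CPU adjustment (not the actual DQoES
    # scheduler). All names and constants below are illustrative assumptions.
    import docker

    TARGET_LATENCY = 0.10   # client-specified QoE target in seconds (illustrative)
    BASE_QUOTA = 100_000    # quota equal to one full core when cpu_period = 100 ms

    def adjust_cpu(container, measured_latency: float, quota: int) -> int:
        """One control step: grow the CPU quota when the measured response time
        misses the QoE target, shrink it when there is comfortable headroom."""
        if measured_latency > TARGET_LATENCY:
            quota = min(quota * 2, 8 * BASE_QUOTA)    # behind target: more CPU
        elif measured_latency < 0.5 * TARGET_LATENCY:
            quota = max(quota // 2, BASE_QUOTA // 4)  # well ahead: release CPU
        container.update(cpu_period=100_000, cpu_quota=quota)
        return quota

    client = docker.from_env()
    worker = client.containers.get("inference-worker")  # placeholder name
    quota = adjust_cpu(worker, measured_latency=0.25, quota=BASE_QUOTA)

In a real deployment the measured latency would come from the application's own metrics; calling the step periodically yields the dynamic adjustment the abstract describes.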
Virtualization is a promising technology that has helped cloud computing become the next wave of the Internet revolution. Adopted by data centers, millions of applications powered by virtual machines improve the quality of services. Although virtual machines are well isolated from each other, they suffer from redundant boot volumes and slow provisioning times. To address these limitations, containers were created to deploy and run distributed applications without launching entire virtual machines. As a dominant player, Docker is an open-source implementation of container technology. When managing a cluster of Docker containers, the management tool, SwarmKit, does not take into consideration the heterogeneity of both physical nodes and virtualized containers. The heterogeneity lies in the fact that different nodes in the cluster may have various configurations concerning resource types, availability, etc., and the demands generated by services vary, such as CPU-intensive (e.g., clustering services) and memory-intensive (e.g., web services) workloads. In this paper, we investigate the Docker container cluster and develop DRAPS, a resource-aware placement scheme that boosts system performance in a heterogeneous cluster.
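The heterogeneity-aware idea can be illustrated with the self-contained sketch below. It is a toy placement score, not the DRAPS algorithm itself, and the node profiles are invented: a service is sent to the feasible node where it leaves the most balanced leftover capacity, so memory-hungry services land on memory-rich nodes.

    # Toy resource-aware placement sketch (a simplification, not DRAPS itself).
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        free_cpu: float   # free CPU cores
        free_mem: float   # free memory in GiB

    def place(service_cpu: float, service_mem: float, nodes: list[Node]) -> Node:
        def score(n: Node) -> float:
            if n.free_cpu < service_cpu or n.free_mem < service_mem:
                return float("-inf")  # node cannot host the service at all
            # Normalized leftover capacity per resource; taking the min rewards
            # a balanced fit, which matters on heterogeneous nodes.
            return min((n.free_cpu - service_cpu) / n.free_cpu,
                       (n.free_mem - service_mem) / n.free_mem)
        return max(nodes, key=score)

    nodes = [Node("cpu-box", free_cpu=16, free_mem=8),
             Node("mem-box", free_cpu=4, free_mem=64)]
    print(place(service_cpu=1, service_mem=16, nodes=nodes).name)  # -> mem-box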
Serverless computing has grown rapidly since the launch of Amazon's Lambda platform. Function-as-a-Service (FaaS), a key enabler of serverless computing, allows an application to be decomposed into simple, standalone functions that are executed on a FaaS platform. The FaaS platform is responsible for deploying the functions and provisioning resources for them. Many of today's cloud applications spread over heterogeneous, connected computing resources and are highly dynamic in their structure and resource requirements. However, FaaS platforms are limited to homogeneous clusters and homogeneous functions and do not account for the data access behavior of functions before scheduling them. We introduce an extension of FaaS to heterogeneous clusters and heterogeneous functions through a network of distributed heterogeneous target platforms called a Function Delivery Network (FDN). A target platform is the combination of a cluster of homogeneous nodes and a FaaS platform on top of it. The FDN provides Function-Delivery-as-a-Service (FDaaS), delivering each function to the right target platform. We showcase the opportunities the FDN offers, such as the varied characteristics of target platforms, collaborative execution between multiple target platforms, and data localization, in fulfilling two objectives: meeting Service Level Objective (SLO) requirements and improving energy efficiency when scheduling functions. We evaluate over five distributed target platforms using FDNInspector, a tool we developed for benchmarking distributed target platforms. In our evaluation, scheduling functions on an edge target platform reduced overall energy consumption by 17x without violating SLO requirements, compared to scheduling on a high-end target platform.
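A rough reduction of the delivery decision is sketched below; the platform profiles and the energy-first policy are our assumptions, not the FDN scheduler itself. Among the target platforms predicted to meet the function's SLO, it picks the least energy-hungry one, which mirrors the edge-versus-high-end trade-off the abstract reports.

    # Illustrative reduction of FDN-style function delivery (not the authors'
    # scheduler). All numbers below are invented for the example.
    from dataclasses import dataclass

    @dataclass
    class TargetPlatform:
        name: str
        expected_latency_ms: float  # predicted execution latency for the function
        watts: float                # average power draw while executing

    def deliver(slo_ms: float, platforms: list[TargetPlatform]) -> TargetPlatform:
        feasible = [p for p in platforms if p.expected_latency_ms <= slo_ms]
        if not feasible:
            raise RuntimeError("no target platform can satisfy the SLO")
        return min(feasible, key=lambda p: p.watts)  # energy-first among feasible

    platforms = [TargetPlatform("edge-cluster", expected_latency_ms=80, watts=15),
                 TargetPlatform("high-end-vm", expected_latency_ms=20, watts=250)]
    print(deliver(slo_ms=100, platforms=platforms).name)  # -> edge-cluster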
90 - Ying Mao, Peizhao Hu, 2020
Cloud storage plays an important role in social computing. This paper aims to develop a cloud storage management system for mobile devices that supports an extended set of file operations. Because of limits on storage, bandwidth, power consumption, and other resources, most existing cloud storage apps for smartphones do not keep local copies of files. This efficient design, however, limits application capabilities. In this paper, we attempt to extend the file operations available to cloud storage services to better serve smartphone users. We develop an efficient and secure file management system, Skyfiles, to support more advanced file operations. The basic idea of our design is to utilize cloud instances to assist with file operations. In particular, Skyfiles supports downloading, compressing, encrypting, and converting operations, as well as file transfers between two smartphone users' cloud storage spaces. In addition, we propose a protocol for users to share their idle instances. All file operations supported by Skyfiles can be efficiently and securely accomplished with either a self-created or a shared instance.
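The instance-assisted idea can be pictured with the short schematic below; the AssistInstance interface and the operation command are invented for illustration and are not Skyfiles' actual protocol. The point is that the phone sends only a small command while the cloud instance does the heavy work against cloud storage.

    # Schematic of instance-assisted file operations in the spirit of Skyfiles
    # (the AssistInstance interface and command strings are invented here).
    from dataclasses import dataclass

    @dataclass
    class AssistInstance:
        """Stand-in for a self-created or shared cloud instance; a real system
        would execute the command remotely, e.g. over an SSH channel."""
        name: str

        def run(self, command: str) -> str:
            # Echo only, to keep the sketch self-contained and runnable.
            return f"{self.name} executed: {command}"

    def compress_in_cloud(instance: AssistInstance, storage_path: str) -> str:
        # The instance fetches the file from cloud storage, compresses it, and
        # writes the archive back; no large file crosses the phone's link.
        return instance.run(f"fetch {storage_path} && gzip {storage_path}")

    print(compress_in_cloud(AssistInstance("shared-instance"), "/photos/trip.tar"))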
Nowadays, cloud computing adoption as a form of hosted applications and services is widespread because of the decreasing costs of hardware, software, and maintenance. The cloud enables access to a shared pool of virtual resources, hosted in large, energy-hungry data centers, for diverse information and communication services with dynamic workloads. The huge energy consumption of cloud data centers results in high electricity bills as well as the emission of large amounts of carbon dioxide. Needless to say, efficient resource management in cloud environments has become one of the most important priorities of cloud providers and has consequently increased researchers' interest in proposing novel energy-saving solutions. This chapter presents a scientific and taxonomic survey of recent energy-efficient resource management solutions in cloud environments. The main objectives of this study are to propose a complete, novel taxonomy for energy-efficient cloud resource management solutions, review recent research advancements in this area, classify the existing techniques according to the proposed taxonomy, and open up new research directions. It reviews the literature from 2015 through 2021 on energy-efficient cloud resource management techniques and maps it to the proposed taxonomy, which unveils novel research directions and facilitates future research.
