Nowadays, cloud computing adoption as a form of hosted applications and services is widespread, driven by the decreasing costs of hardware, software, and maintenance. The cloud enables access to a shared pool of virtual resources hosted in large, energy-hungry data centers that deliver diverse information and communication services with dynamic workloads. The huge energy consumption of cloud data centers results in high electricity bills as well as the emission of large amounts of carbon dioxide. Consequently, efficient resource management in cloud environments has become one of the most important priorities of cloud providers and has increased researchers' interest in proposing novel energy-saving solutions. This chapter presents a scientific and taxonomic survey of recent energy-efficient resource management solutions in cloud environments. The main objective of this study is to propose a novel, complete taxonomy of energy-efficient cloud resource management solutions, review recent research advancements in this area, classify existing techniques according to the proposed taxonomy, and open up new research directions. It surveys the literature from 2015 through 2021 on energy-efficient cloud resource management techniques and maps it to the proposed taxonomy, which unveils novel research directions and facilitates future research.
The development of computing systems has long focused on meeting growing demand from the consumer and enterprise domains. However, the ever-increasing energy consumption of computing systems has begun to limit further performance growth due to heavy electricity bills and carbon dioxide emissions. Server power consumption continues to grow, and many researchers have projected that, if this pattern persists, the energy cost of a server over its lifespan will exceed its hardware price. The power consumption problem is even more severe for clusters, grids, and clouds, which comprise many thousands of heterogeneous servers. Continuous efforts have been made to reduce the energy consumption of these large-scale infrastructures. To identify the challenges and future enhancements required in the field of energy-efficient cloud computing, it is necessary to synthesize and categorize the research and development done so far. In this paper, the authors discuss the causes and problems associated with the huge energy consumption of cloud data centres and prepare a taxonomy of these problems and their related solutions. The authors cover all aspects of energy consumption by cloud data centres and analyze many research papers to identify the most effective approaches to energy efficiency. This work gives an overview of the energy-consumption problems of cloud data centres and of energy-efficient solutions to them. The paper concludes with a discussion of future enhancements and developments in energy-efficient methods for cloud computing.
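To make the lifetime-cost claim above concrete, the following back-of-the-envelope sketch compares a server's lifetime electricity cost against a typical hardware price. All figures (power draw, lifespan, electricity price, PUE) are illustrative assumptions, not values from the surveyed paper.

```python
# Back-of-the-envelope check of the lifetime energy cost of a server.
# All figures below are illustrative assumptions, not from the paper.
power_kw = 0.4                   # assumed average server power draw (400 W)
lifespan_hours = 24 * 365 * 4    # assumed 4-year lifespan, always on
pue = 1.6                        # assumed power usage effectiveness (cooling overhead)
price_per_kwh = 0.12             # assumed electricity price ($/kWh)

lifetime_cost = power_kw * lifespan_hours * pue * price_per_kwh
print(f"Lifetime electricity cost: ${lifetime_cost:,.0f}")  # ~ $2,691
```

Under these assumed figures, the lifetime electricity bill is already comparable to the purchase price of a commodity rack server, which illustrates why the lifetime-cost argument holds.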
Serverless computing has emerged as an attractive deployment option for cloud applications in recent times. The unique features of this computing model include rapid auto-scaling, strong isolation, fine-grained billing options, and access to a massive service ecosystem that autonomously handles resource management decisions. Owing to these characteristics, this model is increasingly being explored for deployments in geographically distributed edge and fog computing networks as well. Effective management of computing resources has always attracted considerable attention among researchers. The need to automate the entire process of resource provisioning, allocation, scheduling, monitoring, and scaling has created a need for specialized focus on resource management under the serverless model. In this article, we identify the major aspects covering the broader concept of resource management in serverless environments and propose a taxonomy of elements that influence these aspects, encompassing characteristics of system design, workload attributes, and stakeholder expectations. We take a holistic view of serverless environments deployed across edge, fog, and cloud computing networks. We also analyse existing works discussing aspects of serverless resource management using this taxonomy. This article further identifies gaps in the literature and highlights future research directions for improving the capabilities of this computing model.
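As an illustration of the fine-grained billing mentioned above, the sketch below estimates the monthly cost of a serverless function billed per GB-second of execution plus a per-invocation fee, in the style of commercial FaaS offerings; the rates and workload figures are assumptions for illustration only.

```python
# Illustrative fine-grained serverless billing: pay per GB-second of
# execution plus a small per-request fee. All rates are assumed examples.
per_gb_second = 0.0000166667     # assumed $ per GB-second of execution
per_million_requests = 0.20      # assumed $ per one million invocations

memory_gb = 0.5                  # configured function memory
avg_duration_s = 0.120           # average execution time per invocation
invocations = 10_000_000         # invocations per month

compute_cost = invocations * memory_gb * avg_duration_s * per_gb_second
request_cost = invocations / 1_000_000 * per_million_requests
print(f"Estimated monthly bill: ${compute_cost + request_cost:,.2f}")  # ~ $12.00
```

Because billing tracks actual execution time and memory rather than provisioned capacity, an idle function costs nothing, which is what distinguishes this model from VM-based provisioning.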
Cloud computing has rapidly emerged as a model for delivering Internet-based utility computing services. Within cloud computing, Infrastructure as a Service (IaaS) is one of the most important and rapidly growing fields. In this service model, cloud providers supply users with resources such as virtual machines, raw (block) storage, firewalls, load balancers, and network devices. Resource management is one of the most important aspects of IaaS; its benefits include scalability, quality of service, optimal utilization, reduced overheads, increased throughput, reduced latency, specialised environments, cost effectiveness, and a streamlined interface. Traditionally, resource management has been done through static policies, which impose limitations in many dynamic scenarios, prompting cloud service providers to adopt data-driven, machine-learning-based approaches. Machine learning is being used to handle a variety of resource management tasks, including workload estimation, task scheduling, VM consolidation, resource optimization, and energy optimization. This paper provides a detailed review of the challenges in ML-based resource management identified in current research, the approaches proposed to resolve them, and their advantages and limitations. Finally, we propose potential future research directions based on the challenges and limitations identified in current research.
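To make the workload-estimation task above concrete, the following minimal sketch predicts the next interval's CPU utilization from a short lag window using linear regression; the model choice, lag length, and synthetic trace are all assumptions, since the surveyed papers use a wide range of ML techniques.

```python
# Minimal workload-estimation sketch: predict next-interval CPU utilization
# from the previous `lags` intervals with linear regression (scikit-learn).
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic per-interval CPU utilization trace (percent); illustrative only.
history = np.array([32, 35, 41, 38, 45, 52, 49, 55, 61, 58], dtype=float)

lags = 3
X = np.array([history[i:i + lags] for i in range(len(history) - lags)])
y = history[lags:]

model = LinearRegression().fit(X, y)
next_util = model.predict(history[-lags:].reshape(1, -1))[0]
print(f"Predicted next-interval utilization: {next_util:.1f}%")
```

A scheduler or VM-consolidation policy could then act on such forecasts, for example triggering consolidation when predicted utilization falls below a threshold.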
The fog/edge computing model allows harnessing resources in the proximity of Internet of Things (IoT) devices to support various types of real-time IoT applications. However, due to user mobility and the wide range of IoT applications with different requirements, satisfying these applications' requirements is challenging. Executing an IoT application exclusively on one fog/edge server may not always be feasible due to limited resources, while executing IoT applications across different servers requires further collaboration among servers. Also, considering user mobility, some modules of each IoT application may require migration to other servers for execution, leading to service interruption and extra execution costs. In this article, we propose a new weighted cost model for hierarchical fog computing environments, defined in terms of the response time of IoT applications and the energy consumption of IoT devices, to minimize the cost of running IoT applications and potential migrations. Besides, a distributed clustering technique is proposed to enable the collaborative execution of tasks, emitted from application modules, among servers. Also, we propose an application placement technique to minimize the overall cost of executing IoT applications on multiple servers in a distributed manner. Furthermore, a distributed migration management technique is proposed for the potential migration of application modules to other remote servers as users move along their paths. Besides, failure recovery methods are embedded in the clustering, application placement, and migration management techniques to recover from unpredicted failures. The performance results show that our technique significantly outperforms its counterparts in terms of placement deployment time, average task execution cost, total number of migrations, total number of interrupted tasks, and cumulative migration cost.
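The paper's exact weighted cost model is not reproduced here; the sketch below shows one plausible form of such a model, combining normalized response time and device energy consumption with tunable weights. The function name, weights, and normalization bounds are hypothetical illustrations, not the authors' values.

```python
# Hypothetical weighted cost for placing/migrating an IoT application module:
# a convex combination of normalized response time and device energy use.
def weighted_cost(response_time_s, energy_j,
                  w_time=0.5, w_energy=0.5,
                  t_max_s=2.0, e_max_j=50.0):
    """Lower is better; candidate servers can be ranked by this score.

    All parameters are illustrative assumptions, not the paper's values.
    """
    assert abs(w_time + w_energy - 1.0) < 1e-9, "weights should sum to 1"
    return w_time * (response_time_s / t_max_s) + w_energy * (energy_j / e_max_j)

# Example: compare two candidate servers for one application module.
local_edge = weighted_cost(response_time_s=0.3, energy_j=10.0)
remote_cloud = weighted_cost(response_time_s=1.2, energy_j=4.0)
print(f"edge={local_edge:.3f}, cloud={remote_cloud:.3f}")  # pick the smaller
```

Shifting the weights lets a placement policy trade response time against device battery life, which is the kind of trade-off such a cost model is meant to capture.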
Edge/fog computing is a novel computing paradigm that provides resource-limited Internet of Things (IoT) devices with scalable computing and storage resources. Compared to cloud servers, edge/fog servers have fewer resources, but they can be accessed with higher bandwidth and lower communication latency. Thus, integrating edge/fog and cloud infrastructures can support the execution of diverse latency-sensitive and computation-intensive IoT applications. Although some frameworks attempt to provide such integration, several challenges remain to be addressed, such as dynamic scheduling of different IoT applications, scalability mechanisms, multi-platform support, and support for different interaction models. FogBus2, a new Python-based framework, offers a lightweight, distributed, container-based solution to overcome these challenges. In this chapter, we highlight the key features of the FogBus2 framework and describe its main components. Besides, we provide a step-by-step guideline to set up an integrated computing environment containing multiple cloud service providers (hybrid cloud) and edge devices, which is a prerequisite for any IoT application scenario. To achieve this, a low-overhead communication network among all computing resources is initiated by the provided scripts and configuration files. Next, we provide instructions and corresponding code snippets to install and run the main framework and its integrated applications. Finally, we demonstrate how to implement and integrate several new IoT applications and custom scheduling and scalability policies with the FogBus2 framework.