
A Study of Efficient Energy Management Techniques for Cloud Computing Environment

Published by: Dr Mansaf Alam
Publication date: 2018
Research field: Informatics Engineering
Paper language: English




The development of computing systems has been focused on meeting ever-growing demand from the client and enterprise domains. However, the ever-increasing energy consumption of computing systems has begun to limit further performance gains due to heavy electricity bills and carbon dioxide emissions. Server power consumption continues to grow, and many researchers have projected that, if this trend continues, the energy cost of a server over its lifespan will exceed its hardware cost. The power-consumption problem matters even more for clusters, grids, and clouds, which encompass many thousands of heterogeneous servers. Continuous efforts have been made to reduce the energy consumption of these large-scale infrastructures. To identify the challenges and required future enhancements in the field of energy-efficient Cloud Computing, it is necessary to synthesize and categorize the research and development done so far. In this paper, the authors discuss the reasons for and problems associated with the huge energy consumption of Cloud data centres and present a taxonomy of these energy-consumption problems and their related solutions. The authors cover all aspects of energy consumption by Cloud data centres and analyze many research papers to identify better solutions for efficient energy consumption. This work provides an overview of the energy-consumption problems of Cloud data centres and of energy-efficient solutions to these problems. The paper concludes with a discussion of future enhancements and developments in energy-efficient methods in Cloud Computing.
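Much of the energy-modelling work surveyed in this area rests on the widely used linear server power model, in which power draw grows linearly with CPU utilization between an idle floor and a peak value. The sketch below illustrates that model; the wattage figures are assumptions chosen for illustration, not values from the paper.

# A minimal sketch (not from the paper) of the linear server power model
# commonly used in energy-aware cloud research: power grows linearly with
# CPU utilization between an idle floor and a peak value.

def server_power(utilization: float, p_idle: float = 100.0, p_max: float = 250.0) -> float:
    """Estimated power draw (watts) at a given CPU utilization in [0, 1]."""
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    return p_idle + (p_max - p_idle) * utilization

# Energy over a day (kWh) for a server averaging 30% utilization:
energy_kwh = server_power(0.3) * 24 / 1000
print(f"{energy_kwh:.2f} kWh/day")  # 145 W * 24 h = 3.48 kWh/day

This simple model also explains why server consolidation saves energy: an idle machine still draws a large fraction of its peak power, so packing load onto fewer active servers and switching the rest off reduces the total draw.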




Read also

Nowadays, cloud computing adoption as a form of hosted applications and services is widespread due to the decreasing costs of hardware, software, and maintenance. Cloud enables access to a shared pool of virtual resources hosted in large, energy-hungry data centers for diverse information and communication services with dynamic workloads. The huge energy consumption of cloud data centers results in high electricity bills as well as the emission of a large amount of carbon dioxide gas. Needless to say, efficient resource management in cloud environments has become one of the most important priorities of cloud providers and has consequently increased researchers' interest in proposing novel energy-saving solutions. This chapter presents a scientific and taxonomic survey of recent energy-efficient cloud resource management solutions. The main objective of this study is to propose a novel, complete taxonomy of energy-efficient cloud resource management solutions, review recent research advancements in this area, classify the existing techniques based on the proposed taxonomy, and open up new research directions. It reviews the literature from 2015 through 2021 on energy-efficient cloud resource management techniques and maps it to the proposed taxonomy, which unveils novel research directions and facilitates future research.
The Internet of Things (IoT) is an Internet-based environment of connected devices and applications. IoT creates an environment where physical devices and sensors are seamlessly combined into information nodes to deliver innovative and smart services that make human life easier and more efficient. The main objective of an IoT device network is to generate data, which the data-analysis process converts into useful information; it also provides useful resources to end users. IoT resource management is a key challenge in ensuring the quality of the end-user experience. Many smart IoT devices and technologies, such as sensors, actuators, RFID, UMTS, 3G, and GSM, are used to build IoT networks. Cloud Computing plays an important role in deploying these networks by providing physical resources as virtualized resources, consisting of memory, computation power, network bandwidth, virtualized systems, and device drivers, on a secure, pay-per-use basis. One of the major concerns in a Cloud-based IoT environment is resource management, which ensures efficient resource utilization and load balancing, reduces SLA violations, and improves system performance by reducing operational cost and energy consumption. Many researchers have proposed IoT-based resource management techniques. The focus of this paper is to investigate these proposed resource-allocation techniques and identify which parameters must be considered to improve resource allocation for IoT networks. Further, this paper also uncovers the challenges and issues of Cloud-based resource allocation for the IoT environment.
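
As a concrete illustration of the kind of allocation heuristic such surveys classify, the following is a minimal, hypothetical best-fit packing sketch: each request goes to the active host with the least spare capacity that can still hold it, so fewer hosts need to stay powered on. The host model and all numbers are invented for illustration.

# Hypothetical best-fit placement sketch: pack each requested VM onto the
# active host with the least remaining CPU that can still fit it, powering
# on a new host only when no active host has room. Names are illustrative.

def best_fit_place(vm_cpu: float, hosts: list, host_capacity: float = 1.0) -> dict:
    """Place a VM demand (fraction of a host's CPU); return the chosen host."""
    candidates = [h for h in hosts if host_capacity - h["used"] >= vm_cpu]
    if candidates:
        host = min(candidates, key=lambda h: host_capacity - h["used"])  # tightest fit
    else:
        host = {"used": 0.0}          # power on a fresh host
        hosts.append(host)
    host["used"] += vm_cpu
    return host

hosts = []
for demand in [0.5, 0.3, 0.4, 0.2, 0.6]:
    best_fit_place(demand, hosts)
print(len(hosts), "hosts powered on")  # 2 hosts, both fully packed
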
The Fog/Edge computing model allows harnessing resources in the proximity of Internet of Things (IoT) devices to support various types of real-time IoT applications. However, due to user mobility and the wide range of IoT applications with different requirements, satisfying these applications' requirements is a challenging issue. Executing an IoT application exclusively on one fog/edge server may not always be feasible due to limited resources, while executing IoT applications on different servers requires further collaboration among servers. Also, considering user mobility, some modules of each IoT application may require migration to other servers for execution, leading to service interruption and extra execution costs. In this article, we propose a new weighted cost model for hierarchical fog computing environments, in terms of the response time of IoT applications and the energy consumption of IoT devices, to minimize the cost of running IoT applications and potential migrations. Besides, a distributed clustering technique is proposed to enable the collaborative execution of tasks, emitted from application modules, among servers. Also, we propose an application placement technique to minimize the overall cost of executing IoT applications on multiple servers in a distributed manner. Furthermore, a distributed migration management technique is proposed for the potential migration of application modules to other remote servers as users move along their paths. Failure recovery methods are embedded in the clustering, application placement, and migration management techniques to recover from unpredicted failures. The performance results show that our technique significantly outperforms its counterparts in terms of placement deployment time, average execution cost of tasks, total number of migrations, total number of interrupted tasks, and cumulative migration cost.
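
To make the weighted-cost idea concrete, the following is a minimal sketch of a two-term weighted cost used to rank candidate servers. The weights, the normalization, and the per-server figures are assumptions for illustration; the article's actual model is richer than this.

# Illustrative weighted-cost sketch for placing an application module on a
# candidate fog/edge server: combine normalized response time and device
# energy with tunable weights. All values and weights are assumed, not the
# article's exact formulation.

def placement_cost(resp_time_s: float, energy_j: float,
                   max_time_s: float, max_energy_j: float,
                   w_time: float = 0.6, w_energy: float = 0.4) -> float:
    """Weighted cost in [0, 1]; lower is better."""
    return (w_time * resp_time_s / max_time_s
            + w_energy * energy_j / max_energy_j)

# Hypothetical (response time, device energy) per candidate tier:
servers = {"edge": (0.05, 2.0), "fog": (0.12, 1.2), "cloud": (0.40, 0.8)}
best = min(servers, key=lambda s: placement_cost(*servers[s],
                                                 max_time_s=0.40, max_energy_j=2.0))
print(best)  # the server with the lowest combined cost

Shifting w_time toward 1 pushes placement to nearby edge servers (low latency, high device energy); shifting w_energy toward 1 favors remote servers that spare the device's battery.
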
Gauri Joshi, Emina Soljanin, 2015
In cloud computing systems, assigning a task to multiple servers and waiting for the earliest copy to finish is an effective method to combat the variability in the response time of individual servers and reduce latency. But adding redundancy may result in a higher cost of computing resources, as well as an increase in queueing delay due to higher traffic load. This work helps in understanding when and how redundancy gives a cost-efficient reduction in latency. For a general task service time distribution, we compare different redundancy strategies in terms of the number of redundant tasks and the time when they are issued and canceled. We gain the insight that the log-concavity of the task service time creates a dichotomy in when adding redundancy helps. If the service time distribution is log-convex (i.e., the log of the tail probability is convex), then adding maximum redundancy reduces both latency and cost. If it is log-concave (i.e., the log of the tail probability is concave), then less redundancy and early cancellation of redundant tasks are more effective. Using these insights, we design a general redundancy strategy that achieves a good latency-cost trade-off for an arbitrary service time distribution. This work also generalizes and extends some results in the analysis of fork-join queues.
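The log-concave side of this dichotomy is easy to see in a small Monte Carlo sketch: with a shifted-exponential service time (a log-concave tail), replicating a task lowers latency but inflates the total server time paid. The shift and rate values below are arbitrary illustrations, not parameters from the paper.

# Monte Carlo sketch (an illustration, not the paper's analysis) of the
# latency/cost trade-off from replicating a task on r servers and canceling
# the stragglers once the first replica finishes.

import random

def replicate(r: int, trials: int = 100_000, shift: float = 1.0, rate: float = 1.0):
    """Shifted-exponential service times (log-concave tail for shift > 0).
    Returns (mean latency, mean total server time paid)."""
    lat = cost = 0.0
    for _ in range(trials):
        t = min(shift + random.expovariate(rate) for _ in range(r))
        lat += t
        cost += r * t          # all r replicas run until the winner finishes
    return lat / trials, cost / trials

for r in (1, 2, 4):
    l, c = replicate(r)
    print(f"r={r}: latency={l:.3f}, cost={c:.3f}")
# Latency falls with r (roughly 1 + 1/r here), but the total computing cost
# grows, matching the log-concave side of the paper's dichotomy.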
With the emergence of the big-data age, the issue of how to obtain valuable knowledge from a dataset efficiently and accurately has attracted increasing attention from both academia and industry. This paper presents a Parallel Random Forest (PRF) algorithm for big data on the Apache Spark platform. The PRF algorithm is optimized based on a hybrid approach combining data-parallel and task-parallel optimization. From the perspective of data-parallel optimization, a vertical data-partitioning method is performed to reduce the data communication cost effectively, and a data-multiplexing method is performed to allow the training dataset to be reused and to diminish the volume of data. From the perspective of task-parallel optimization, a dual parallel approach is carried out in the training process of RF, and a task Directed Acyclic Graph (DAG) is created according to the parallel training process of PRF and the dependence of the Resilient Distributed Dataset (RDD) objects. Then, different task schedulers are invoked for the tasks in the DAG. Moreover, to improve the algorithm's accuracy for large, high-dimensional, and noisy data, we perform a dimension-reduction approach in the training process and a weighted voting approach in the prediction process prior to parallelization. Extensive experimental results indicate the superiority and notable advantages of the PRF algorithm over the relevant algorithms implemented by Spark MLlib and in other studies in terms of classification accuracy, performance, and scalability.
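The PRF implementation itself is not given in this excerpt, but the Spark MLlib random forest it is compared against can be sketched as a baseline; the dataset path, column names, and parameter values below are assumptions for illustration.

# Minimal baseline sketch using Spark MLlib's built-in random forest, the
# comparison target mentioned in the abstract. The file name, column names,
# and hyperparameters are illustrative assumptions.

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier

spark = SparkSession.builder.appName("rf-baseline").getOrCreate()

df = spark.read.csv("data.csv", header=True, inferSchema=True)  # hypothetical file
feature_cols = [c for c in df.columns if c != "label"]
df = VectorAssembler(inputCols=feature_cols, outputCol="features").transform(df)

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = RandomForestClassifier(numTrees=100, labelCol="label",
                               featuresCol="features").fit(train)
accuracy = model.transform(test).filter("prediction = label").count() / test.count()
print(f"test accuracy: {accuracy:.3f}")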